00:00:00.001 Started by upstream project "autotest-nightly" build number 4134
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3496
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.048 The recommended git tool is: git
00:00:00.048 using credential 00000000-0000-0000-0000-000000000002
00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.068 Fetching changes from the remote Git repository
00:00:00.070 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.099 Using shallow fetch with depth 1
00:00:00.099 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.099 > git --version # timeout=10
00:00:00.154 > git --version # 'git version 2.39.2'
00:00:00.154 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.723 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.734 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.747 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:03.747 > git config core.sparsecheckout # timeout=10
00:00:03.757 > git read-tree -mu HEAD # timeout=10
00:00:03.774 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:03.792 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:03.792 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:03.894 [Pipeline] Start of Pipeline
00:00:03.907 [Pipeline] library
00:00:03.909 Loading library shm_lib@master
00:00:03.909 Library shm_lib@master is cached. Copying from home.
00:00:03.925 [Pipeline] node
00:00:03.947 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:03.949 [Pipeline] {
00:00:03.957 [Pipeline] catchError
00:00:03.958 [Pipeline] {
00:00:03.970 [Pipeline] wrap
00:00:03.979 [Pipeline] {
00:00:03.987 [Pipeline] stage
00:00:03.989 [Pipeline] { (Prologue)
00:00:04.007 [Pipeline] echo
00:00:04.009 Node: VM-host-SM38
00:00:04.015 [Pipeline] cleanWs
00:00:04.027 [WS-CLEANUP] Deleting project workspace...
00:00:04.027 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.034 [WS-CLEANUP] done
00:00:04.225 [Pipeline] setCustomBuildProperty
00:00:04.302 [Pipeline] httpRequest
00:00:05.116 [Pipeline] echo
00:00:05.117 Sorcerer 10.211.164.101 is alive
00:00:05.125 [Pipeline] retry
00:00:05.127 [Pipeline] {
00:00:05.136 [Pipeline] httpRequest
00:00:05.141 HttpMethod: GET
00:00:05.141 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:05.142 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:05.143 Response Code: HTTP/1.1 200 OK
00:00:05.143 Success: Status code 200 is in the accepted range: 200,404
00:00:05.144 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:06.373 [Pipeline] }
00:00:06.385 [Pipeline] // retry
00:00:06.389 [Pipeline] sh
00:00:06.671 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:06.688 [Pipeline] httpRequest
00:00:07.244 [Pipeline] echo
00:00:07.245 Sorcerer 10.211.164.101 is alive
00:00:07.254 [Pipeline] retry
00:00:07.257 [Pipeline] {
00:00:07.269 [Pipeline] httpRequest
00:00:07.274 HttpMethod: GET
00:00:07.274 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:07.275 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:07.279 Response Code: HTTP/1.1 200 OK
00:00:07.280 Success: Status code 200 is in the accepted range: 200,404
00:00:07.281 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:01:25.825 [Pipeline] }
00:01:25.842 [Pipeline] // retry
00:01:25.849 [Pipeline] sh
00:01:26.136 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:01:28.681 [Pipeline] sh
00:01:28.959 + git -C spdk log --oneline -n5
00:01:28.959 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:01:28.959 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:01:28.959 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:01:28.959 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event()
00:01:28.959 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event()
00:01:28.975 [Pipeline] writeFile
00:01:28.989 [Pipeline] sh
00:01:29.270 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:29.284 [Pipeline] sh
00:01:29.568 + cat autorun-spdk.conf
00:01:29.568 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.568 SPDK_TEST_NVME=1
00:01:29.568 SPDK_TEST_FTL=1
00:01:29.568 SPDK_TEST_ISAL=1
00:01:29.568 SPDK_RUN_ASAN=1
00:01:29.568 SPDK_RUN_UBSAN=1
00:01:29.568 SPDK_TEST_XNVME=1
00:01:29.568 SPDK_TEST_NVME_FDP=1
00:01:29.568 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.576 RUN_NIGHTLY=1
00:01:29.578 [Pipeline] }
00:01:29.592 [Pipeline] // stage
00:01:29.606 [Pipeline] stage
00:01:29.608 [Pipeline] { (Run VM)
00:01:29.621 [Pipeline] sh
00:01:29.907 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:29.907 + echo 'Start stage prepare_nvme.sh'
00:01:29.907 Start stage prepare_nvme.sh
00:01:29.907 + [[ -n 3 ]]
00:01:29.907 + disk_prefix=ex3
00:01:29.907 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:29.907 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:29.907 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:29.907 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.907 ++ SPDK_TEST_NVME=1
00:01:29.907 ++ SPDK_TEST_FTL=1
00:01:29.907 ++ SPDK_TEST_ISAL=1
00:01:29.907 ++ SPDK_RUN_ASAN=1
00:01:29.907 ++ SPDK_RUN_UBSAN=1
00:01:29.907 ++ SPDK_TEST_XNVME=1
00:01:29.907 ++ SPDK_TEST_NVME_FDP=1
00:01:29.907 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.907 ++ RUN_NIGHTLY=1
00:01:29.907 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:29.907 + nvme_files=()
00:01:29.907 + declare -A nvme_files
00:01:29.907 + backend_dir=/var/lib/libvirt/images/backends
00:01:29.907 + nvme_files['nvme.img']=5G
00:01:29.907 + nvme_files['nvme-cmb.img']=5G
00:01:29.907 + nvme_files['nvme-multi0.img']=4G
00:01:29.907 + nvme_files['nvme-multi1.img']=4G
00:01:29.907 + nvme_files['nvme-multi2.img']=4G
00:01:29.907 + nvme_files['nvme-openstack.img']=8G
00:01:29.907 + nvme_files['nvme-zns.img']=5G
00:01:29.907 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:29.907 + (( SPDK_TEST_FTL == 1 ))
00:01:29.907 + nvme_files["nvme-ftl.img"]=6G
00:01:29.907 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:29.907 + nvme_files["nvme-fdp.img"]=1G
00:01:29.907 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:29.907 + for nvme in "${!nvme_files[@]}"
00:01:29.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:29.907 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.907 + for nvme in "${!nvme_files[@]}"
00:01:29.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:01:29.907 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:29.907 + for nvme in "${!nvme_files[@]}"
00:01:29.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:29.907 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.907 + for nvme in "${!nvme_files[@]}"
00:01:29.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:29.907 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:29.907 + for nvme in "${!nvme_files[@]}"
00:01:29.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:29.907 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.907 + for nvme in "${!nvme_files[@]}"
00:01:29.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:29.907 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:30.166 + for nvme in "${!nvme_files[@]}"
00:01:30.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:30.166 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:30.166 + for nvme in "${!nvme_files[@]}"
00:01:30.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:01:30.166 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:30.166 + for nvme in "${!nvme_files[@]}"
00:01:30.166 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:30.166 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:30.166 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:30.166 + echo 'End stage prepare_nvme.sh'
00:01:30.166 End stage prepare_nvme.sh
00:01:30.176 [Pipeline] sh
00:01:30.454 + DISTRO=fedora39
00:01:30.454 + CPUS=10
00:01:30.454 + RAM=12288
00:01:30.454 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:30.454 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:30.454
00:01:30.454 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:30.454 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:30.454 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:30.454 HELP=0
00:01:30.454 DRY_RUN=0
00:01:30.454 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:01:30.454 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:30.454 NVME_AUTO_CREATE=0
00:01:30.454 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:01:30.454 NVME_CMB=,,,,
00:01:30.454 NVME_PMR=,,,,
00:01:30.454 NVME_ZNS=,,,,
00:01:30.454 NVME_MS=true,,,,
00:01:30.454 NVME_FDP=,,,on,
00:01:30.455 SPDK_VAGRANT_DISTRO=fedora39
00:01:30.455 SPDK_VAGRANT_VMCPU=10
00:01:30.455 SPDK_VAGRANT_VMRAM=12288
00:01:30.455 SPDK_VAGRANT_PROVIDER=libvirt
00:01:30.455 SPDK_VAGRANT_HTTP_PROXY=
00:01:30.455 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:30.455 SPDK_OPENSTACK_NETWORK=0
00:01:30.455 VAGRANT_PACKAGE_BOX=0
00:01:30.455 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:30.455 FORCE_DISTRO=true
00:01:30.455 VAGRANT_BOX_VERSION=
00:01:30.455 EXTRA_VAGRANTFILES=
00:01:30.455 NIC_MODEL=e1000
00:01:30.455
00:01:30.455 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:30.455 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:32.989 Bringing machine 'default' up with 'libvirt' provider...
00:01:33.247 ==> default: Creating image (snapshot of base box volume).
00:01:33.507 ==> default: Creating domain with the following settings...
00:01:33.507 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727725517_ade33c65e6559f286653
00:01:33.507 ==> default: -- Domain type: kvm
00:01:33.507 ==> default: -- Cpus: 10
00:01:33.507 ==> default: -- Feature: acpi
00:01:33.507 ==> default: -- Feature: apic
00:01:33.507 ==> default: -- Feature: pae
00:01:33.507 ==> default: -- Memory: 12288M
00:01:33.507 ==> default: -- Memory Backing: hugepages:
00:01:33.507 ==> default: -- Management MAC:
00:01:33.507 ==> default: -- Loader:
00:01:33.507 ==> default: -- Nvram:
00:01:33.507 ==> default: -- Base box: spdk/fedora39
00:01:33.507 ==> default: -- Storage pool: default
00:01:33.507 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727725517_ade33c65e6559f286653.img (20G)
00:01:33.507 ==> default: -- Volume Cache: default
00:01:33.507 ==> default: -- Kernel:
00:01:33.507 ==> default: -- Initrd:
00:01:33.507 ==> default: -- Graphics Type: vnc
00:01:33.507 ==> default: -- Graphics Port: -1
00:01:33.507 ==> default: -- Graphics IP: 127.0.0.1
00:01:33.507 ==> default: -- Graphics Password: Not defined
00:01:33.507 ==> default: -- Video Type: cirrus
00:01:33.507 ==> default: -- Video VRAM: 9216
00:01:33.507 ==> default: -- Sound Type:
00:01:33.507 ==> default: -- Keymap: en-us
00:01:33.507 ==> default: -- TPM Path:
00:01:33.507 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:33.507 ==> default: -- Command line args:
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:33.507 ==> default: -> value=-drive,
00:01:33.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:33.507 ==> default: -> value=-drive,
00:01:33.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:33.507 ==> default: -> value=-drive,
00:01:33.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.507 ==> default: -> value=-drive,
00:01:33.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.507 ==> default: -> value=-drive,
00:01:33.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:33.507 ==> default: -> value=-drive,
00:01:33.507 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:33.507 ==> default: -> value=-device,
00:01:33.507 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:33.507 ==> default: Creating shared folders metadata...
00:01:33.507 ==> default: Starting domain.
00:01:36.048 ==> default: Waiting for domain to get an IP address...
00:01:58.000 ==> default: Waiting for SSH to become available...
00:01:58.261 ==> default: Configuring and enabling network interfaces...
00:02:02.469 default: SSH address: 192.168.121.64:22
00:02:02.469 default: SSH username: vagrant
00:02:02.469 default: SSH auth method: private key
00:02:04.385 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:12.528 ==> default: Mounting SSHFS shared folder...
00:02:14.444 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:14.444 ==> default: Checking Mount..
00:02:15.830 ==> default: Folder Successfully Mounted!
00:02:15.830
00:02:15.830 SUCCESS!
00:02:15.830
00:02:15.830 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:15.830 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:15.830 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:15.830
00:02:15.840 [Pipeline] }
00:02:15.857 [Pipeline] // stage
00:02:15.865 [Pipeline] dir
00:02:15.865 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:15.867 [Pipeline] {
00:02:15.882 [Pipeline] catchError
00:02:15.884 [Pipeline] {
00:02:15.897 [Pipeline] sh
00:02:16.222 + vagrant ssh-config --host vagrant
00:02:16.222 + sed -ne '/^Host/,$p'
00:02:16.222 + tee ssh_conf
00:02:18.764 Host vagrant
00:02:18.764 HostName 192.168.121.64
00:02:18.764 User vagrant
00:02:18.764 Port 22
00:02:18.764 UserKnownHostsFile /dev/null
00:02:18.764 StrictHostKeyChecking no
00:02:18.764 PasswordAuthentication no
00:02:18.764 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:18.764 IdentitiesOnly yes
00:02:18.764 LogLevel FATAL
00:02:18.764 ForwardAgent yes
00:02:18.764 ForwardX11 yes
00:02:18.764
00:02:18.780 [Pipeline] withEnv
00:02:18.782 [Pipeline] {
00:02:18.796 [Pipeline] sh
00:02:19.079 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:19.079 source /etc/os-release
00:02:19.079 [[ -e /image.version ]] && img=$(< /image.version)
00:02:19.079 # Minimal, systemd-like check.
00:02:19.079 if [[ -e /.dockerenv ]]; then
00:02:19.079 # Clear garbage from the node'\''s name:
00:02:19.079 # agt-er_autotest_547-896 -> autotest_547-896
00:02:19.079 # $HOSTNAME is the actual container id
00:02:19.079 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:19.079 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:19.079 # We can assume this is a mount from a host where container is running,
00:02:19.079 # so fetch its hostname to easily identify the target swarm worker.
00:02:19.079 container="$(< /etc/hostname) ($agent)"
00:02:19.079 else
00:02:19.079 # Fallback
00:02:19.079 container=$agent
00:02:19.079 fi
00:02:19.079 fi
00:02:19.079 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:19.079 '
00:02:19.353 [Pipeline] }
00:02:19.368 [Pipeline] // withEnv
00:02:19.377 [Pipeline] setCustomBuildProperty
00:02:19.392 [Pipeline] stage
00:02:19.394 [Pipeline] { (Tests)
00:02:19.411 [Pipeline] sh
00:02:19.696 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:19.972 [Pipeline] sh
00:02:20.257 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:20.534 [Pipeline] timeout
00:02:20.534 Timeout set to expire in 50 min
00:02:20.536 [Pipeline] {
00:02:20.550 [Pipeline] sh
00:02:20.833 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:21.405 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:02:21.420 [Pipeline] sh
00:02:21.708 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:21.984 [Pipeline] sh
00:02:22.269 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:22.545 [Pipeline] sh
00:02:22.826 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:23.087 ++ readlink -f spdk_repo
00:02:23.088 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:23.088 + [[ -n /home/vagrant/spdk_repo ]]
00:02:23.088 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:23.088 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:23.088 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:23.088 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:23.088 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:23.088 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:23.088 + cd /home/vagrant/spdk_repo
00:02:23.088 + source /etc/os-release
00:02:23.088 ++ NAME='Fedora Linux'
00:02:23.088 ++ VERSION='39 (Cloud Edition)'
00:02:23.088 ++ ID=fedora
00:02:23.088 ++ VERSION_ID=39
00:02:23.088 ++ VERSION_CODENAME=
00:02:23.088 ++ PLATFORM_ID=platform:f39
00:02:23.088 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:23.088 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:23.088 ++ LOGO=fedora-logo-icon
00:02:23.088 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:23.088 ++ HOME_URL=https://fedoraproject.org/
00:02:23.088 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:23.088 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:23.088 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:23.088 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:23.088 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:23.088 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:23.088 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:23.088 ++ SUPPORT_END=2024-11-12
00:02:23.088 ++ VARIANT='Cloud Edition'
00:02:23.088 ++ VARIANT_ID=cloud
00:02:23.088 + uname -a
00:02:23.088 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:23.088 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:23.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:23.610 Hugepages
00:02:23.610 node hugesize free / total
00:02:23.610 node0 1048576kB 0 / 0
00:02:23.610 node0 2048kB 0 / 0
00:02:23.610
00:02:23.610 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:23.610 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:23.870 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:23.870 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:23.870 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:23.870 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:23.870 + rm -f /tmp/spdk-ld-path
00:02:23.870 + source autorun-spdk.conf
00:02:23.870 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:23.870 ++ SPDK_TEST_NVME=1
00:02:23.870 ++ SPDK_TEST_FTL=1
00:02:23.870 ++ SPDK_TEST_ISAL=1
00:02:23.870 ++ SPDK_RUN_ASAN=1
00:02:23.870 ++ SPDK_RUN_UBSAN=1
00:02:23.870 ++ SPDK_TEST_XNVME=1
00:02:23.870 ++ SPDK_TEST_NVME_FDP=1
00:02:23.870 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:23.870 ++ RUN_NIGHTLY=1
00:02:23.870 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:23.870 + [[ -n '' ]]
00:02:23.870 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:23.870 + for M in /var/spdk/build-*-manifest.txt
00:02:23.870 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:23.870 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:23.870 + for M in /var/spdk/build-*-manifest.txt
00:02:23.870 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:23.870 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:23.870 + for M in /var/spdk/build-*-manifest.txt
00:02:23.870 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:23.870 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:23.870 ++ uname
00:02:23.870 + [[ Linux == \L\i\n\u\x ]]
00:02:23.870 + sudo dmesg -T
00:02:23.870 + sudo dmesg --clear
00:02:23.870 + dmesg_pid=5027
00:02:23.870 + [[ Fedora Linux == FreeBSD ]]
00:02:23.870 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:23.870 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:23.871 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:23.871 + [[ -x /usr/src/fio-static/fio ]]
00:02:23.871 + sudo dmesg -Tw
00:02:23.871 + export FIO_BIN=/usr/src/fio-static/fio
00:02:23.871 + FIO_BIN=/usr/src/fio-static/fio
00:02:23.871 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:23.871 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:23.871 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:23.871 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.871 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.871 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:23.871 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.871 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.871 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.871 Test configuration: 00:02:23.871 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.871 SPDK_TEST_NVME=1 00:02:23.871 SPDK_TEST_FTL=1 00:02:23.871 SPDK_TEST_ISAL=1 00:02:23.871 SPDK_RUN_ASAN=1 00:02:23.871 SPDK_RUN_UBSAN=1 00:02:23.871 SPDK_TEST_XNVME=1 00:02:23.871 SPDK_TEST_NVME_FDP=1 00:02:23.871 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:24.131 RUN_NIGHTLY=1 19:46:08 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:24.131 19:46:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:24.131 19:46:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:24.131 19:46:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:24.131 19:46:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.131 19:46:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.131 19:46:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.131 19:46:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.131 19:46:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.131 19:46:08 -- paths/export.sh@5 -- $ export PATH 00:02:24.131 19:46:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.131 19:46:08 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:24.131 19:46:08 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:24.131 19:46:08 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727725568.XXXXXX 00:02:24.131 19:46:08 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727725568.mFj7Lg 00:02:24.131 19:46:08 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:24.131 19:46:08 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:02:24.131 19:46:08 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:24.131 19:46:08 -- common/autobuild_common.sh@492 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:24.131 19:46:08 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:24.131 19:46:08 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:24.131 19:46:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:24.131 19:46:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.131 19:46:08 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:24.131 19:46:08 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:24.131 19:46:08 -- pm/common@17 -- $ local monitor 00:02:24.131 19:46:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.131 19:46:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.131 19:46:08 -- pm/common@25 -- $ sleep 1 00:02:24.131 19:46:08 -- pm/common@21 -- $ date +%s 00:02:24.131 19:46:08 -- pm/common@21 -- $ date +%s 00:02:24.131 19:46:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727725568 00:02:24.131 19:46:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727725568 00:02:24.131 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727725568_collect-cpu-load.pm.log 00:02:24.131 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727725568_collect-vmstat.pm.log 00:02:25.074 19:46:09 -- common/autobuild_common.sh@498 -- 
$ trap stop_monitor_resources EXIT
00:02:25.074 19:46:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:25.075 19:46:09 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:25.075 19:46:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:25.075 19:46:09 -- spdk/autobuild.sh@16 -- $ date -u
00:02:25.075 Mon Sep 30 07:46:09 PM UTC 2024
00:02:25.075 19:46:09 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:25.075 v25.01-pre-17-g09cc66129
00:02:25.075 19:46:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:25.075 19:46:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:25.075 19:46:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:25.075 19:46:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:25.075 19:46:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:25.075 ************************************
00:02:25.075 START TEST asan
00:02:25.075 ************************************
00:02:25.075 using asan
00:02:25.075 ************************************
00:02:25.075 END TEST asan
00:02:25.075 ************************************
00:02:25.075 19:46:09 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:25.075
00:02:25.075 real 0m0.000s
00:02:25.075 user 0m0.000s
00:02:25.075 sys 0m0.000s
00:02:25.075 19:46:09 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:25.075 19:46:09 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:25.075 19:46:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:25.075 19:46:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:25.075 19:46:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:25.075 19:46:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:25.075 19:46:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:25.075 ************************************
00:02:25.075 START TEST ubsan
00:02:25.075 ************************************
00:02:25.075 using ubsan
00:02:25.075 19:46:09 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:25.075
00:02:25.075 real 0m0.000s
00:02:25.075 user 0m0.000s
00:02:25.075 sys 0m0.000s
00:02:25.075 19:46:09 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:25.075 19:46:09 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:25.075 ************************************
00:02:25.075 END TEST ubsan
00:02:25.075 ************************************
00:02:25.337 19:46:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:25.337 19:46:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:25.337 19:46:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:25.337 19:46:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:25.337 19:46:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:25.337 19:46:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:25.337 19:46:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:25.337 19:46:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:25.337 19:46:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:25.337 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:25.337 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:25.912 Using 'verbs' RDMA provider
00:02:36.849 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:46.962 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:47.220 Creating mk/config.mk...done.
00:02:47.220 Creating mk/cc.flags.mk...done.
00:02:47.220 Type 'make' to build.
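The `run_test asan` / `run_test ubsan` traces above show SPDK's autotest helper wrapping each test in START/END banners and timing it (`real`/`user`/`sys`). A minimal sketch of that wrapper pattern, for illustration only: the banner text mimics the log, but the function body is a hypothetical simplification, not SPDK's actual `run_test` implementation.

```shell
#!/bin/sh
# Sketch of a run_test-style wrapper (illustrative, not SPDK's code):
# prints START/END banners around a command and reports wall-clock time.
run_test() {
    name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    start=$(date +%s)
    "$@"                       # run the wrapped command with its arguments
    rc=$?
    end=$(date +%s)
    echo "************************************"
    echo "END TEST $name (rc=$rc, $((end - start))s)"
    echo "************************************"
    return $rc
}

run_test asan echo 'using asan'
```

The real helper additionally toggles xtrace around the wrapped command, which is why `xtrace_disable` and `set +x` appear in the log.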
00:02:47.220 19:46:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:47.220 19:46:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:47.220 19:46:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:47.220 19:46:31 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.220 ************************************
00:02:47.220 START TEST make
00:02:47.220 ************************************
00:02:47.220 19:46:31 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:47.220 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:47.220 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:47.220 meson setup builddir \
00:02:47.220 -Dwith-libaio=enabled \
00:02:47.220 -Dwith-liburing=enabled \
00:02:47.220 -Dwith-libvfn=disabled \
00:02:47.220 -Dwith-spdk=false && \
00:02:47.220 meson compile -C builddir && \
00:02:47.220 cd -)
00:02:47.478 make[1]: Nothing to be done for 'all'.
00:02:49.376 The Meson build system
00:02:49.376 Version: 1.5.0
00:02:49.376 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:49.376 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:49.376 Build type: native build
00:02:49.376 Project name: xnvme
00:02:49.376 Project version: 0.7.3
00:02:49.377 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:49.377 C linker for the host machine: cc ld.bfd 2.40-14
00:02:49.377 Host machine cpu family: x86_64
00:02:49.377 Host machine cpu: x86_64
00:02:49.377 Message: host_machine.system: linux
00:02:49.377 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:49.377 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:49.377 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:49.377 Run-time dependency threads found: YES
00:02:49.377 Has header "setupapi.h" : NO
00:02:49.377 Has header "linux/blkzoned.h" : YES
00:02:49.377 Has header "linux/blkzoned.h" : YES (cached)
00:02:49.377 Has header "libaio.h" : YES
00:02:49.377 Library aio found: YES
00:02:49.377 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:49.377 Run-time dependency liburing found: YES 2.2
00:02:49.377 Dependency libvfn skipped: feature with-libvfn disabled
00:02:49.377 Run-time dependency appleframeworks found: NO (tried framework)
00:02:49.377 Run-time dependency appleframeworks found: NO (tried framework)
00:02:49.377 Configuring xnvme_config.h using configuration
00:02:49.377 Configuring xnvme.spec using configuration
00:02:49.377 Run-time dependency bash-completion found: YES 2.11
00:02:49.377 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:49.377 Program cp found: YES (/usr/bin/cp)
00:02:49.377 Has header "winsock2.h" : NO
00:02:49.377 Has header "dbghelp.h" : NO
00:02:49.377 Library rpcrt4 found: NO
00:02:49.377 Library rt found: YES
00:02:49.377 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:49.377 Found CMake: /usr/bin/cmake (3.27.7)
00:02:49.377 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:49.377 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:49.377 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:49.377 Build targets in project: 32
00:02:49.377
00:02:49.377 xnvme 0.7.3
00:02:49.377
00:02:49.377 User defined options
00:02:49.377 with-libaio : enabled
00:02:49.377 with-liburing: enabled
00:02:49.377 with-libvfn : disabled
00:02:49.377 with-spdk : false
00:02:49.377
00:02:49.377 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:49.946 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:49.946 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:49.947 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:49.947 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:49.947 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
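The xnvme step above is a standard out-of-tree meson build: `meson setup builddir` with feature toggles passed as `-D` options, then `meson compile -C builddir`. A small sketch that assembles that command line from the options shown in the log; it only echoes the command (a dry run), so it works without meson installed. The `-D` option names are taken from the log; the `build_cmd` helper itself is hypothetical.

```shell
#!/bin/sh
# Dry-run sketch: reconstruct the xnvme meson invocation from the log.
# Echoes the command instead of executing it (build_cmd is illustrative).
build_cmd() {
    dir=$1
    set -- meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    echo "(cd $dir && $* && meson compile -C builddir)"
}

build_cmd /home/vagrant/spdk_repo/spdk/xnvme
```

The `with-spdk=false` toggle matches the later "Run-time dependency _spdk found: NO" line: the SPDK backend is simply not probed as a hard requirement for this sub-build.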
00:02:49.947 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:49.947 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:49.947 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:49.947 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:49.947 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:49.947 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:49.947 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:49.947 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:49.947 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:49.947 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:49.947 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:49.947 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:49.947 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:49.947 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:49.947 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:49.947 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:50.207 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:50.207 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:50.207 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:50.207 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:50.207 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:50.207 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:50.207 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:50.207 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:50.207 
[29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:50.207 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:50.207 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:50.207 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:50.207 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:50.207 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:50.207 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:50.207 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:50.207 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:50.207 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:50.207 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:50.207 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:50.207 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:50.207 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:50.207 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:50.207 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:50.207 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:50.207 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:50.207 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:50.207 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:50.207 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:50.207 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:50.207 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:50.207 [52/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:50.207 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:50.207 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:50.207 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:50.207 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:50.207 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:50.207 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:50.207 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:50.466 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:50.466 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:50.466 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:50.466 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:50.466 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:50.466 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:50.466 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:50.466 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:50.466 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:50.466 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:50.466 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:50.466 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:50.466 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:50.466 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:50.466 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:50.466 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:50.466 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:50.466 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:50.466 [78/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:50.466 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:50.724 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:50.724 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:50.724 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:50.724 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:50.724 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:50.724 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:50.724 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:50.724 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:50.724 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:50.724 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:50.724 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:50.724 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:50.724 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:50.724 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:50.724 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:50.724 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:50.724 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:50.724 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:50.724 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:50.724 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:50.724 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:50.724 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:50.724 [102/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:50.724 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:50.724 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:50.724 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:50.983 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:50.983 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:50.983 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:50.983 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:50.983 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:50.983 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:50.983 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:50.983 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:50.983 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:50.983 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:50.983 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:50.983 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:50.983 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:50.983 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:50.983 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:50.983 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:50.983 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:50.983 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:50.983 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:50.983 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:50.983 [126/203] 
Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:50.983 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:50.983 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:50.983 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:50.983 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:50.983 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:50.983 [132/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:51.242 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:51.242 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:51.242 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:51.242 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:51.242 [137/203] Linking target lib/libxnvme.so 00:02:51.242 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:51.242 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:51.242 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:51.242 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:51.242 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:51.242 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:51.242 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:51.242 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:51.242 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:51.242 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:51.242 [148/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:51.242 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:51.242 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:51.242 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:51.501 [152/203] 
Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:51.501 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:51.501 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:51.501 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:51.501 [156/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:51.501 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:51.501 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:51.501 [159/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:51.501 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:51.501 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:51.501 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:51.501 [163/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:51.501 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:51.501 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:51.501 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:51.501 [167/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:51.501 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:51.501 [169/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:51.501 [170/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:51.760 [171/203] Linking static target lib/libxnvme.a 00:02:51.760 [172/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:51.760 [173/203] Linking target tests/xnvme_tests_buf 00:02:51.760 [174/203] Linking target tests/xnvme_tests_lblk 00:02:51.760 [175/203] Linking target tests/xnvme_tests_xnvme_file 00:02:51.760 [176/203] Linking target tests/xnvme_tests_async_intf 00:02:51.760 [177/203] Linking target tests/xnvme_tests_enum 00:02:51.760 [178/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:51.760 [179/203] 
Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:51.760 [180/203] Linking target tests/xnvme_tests_ioworker 00:02:51.760 [181/203] Linking target tests/xnvme_tests_cli 00:02:51.760 [182/203] Linking target tests/xnvme_tests_scc 00:02:51.760 [183/203] Linking target tests/xnvme_tests_znd_state 00:02:51.760 [184/203] Linking target tests/xnvme_tests_znd_append 00:02:51.760 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:51.760 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:51.760 [187/203] Linking target tests/xnvme_tests_kvs 00:02:51.760 [188/203] Linking target tools/lblk 00:02:51.760 [189/203] Linking target tools/xnvme_file 00:02:51.760 [190/203] Linking target tests/xnvme_tests_map 00:02:51.760 [191/203] Linking target tools/kvs 00:02:51.760 [192/203] Linking target tools/xdd 00:02:51.760 [193/203] Linking target examples/xnvme_dev 00:02:51.760 [194/203] Linking target tools/zoned 00:02:51.760 [195/203] Linking target examples/xnvme_enum 00:02:51.760 [196/203] Linking target examples/xnvme_hello 00:02:51.760 [197/203] Linking target examples/xnvme_io_async 00:02:51.760 [198/203] Linking target examples/zoned_io_async 00:02:51.760 [199/203] Linking target examples/zoned_io_sync 00:02:51.760 [200/203] Linking target examples/xnvme_single_sync 00:02:51.760 [201/203] Linking target examples/xnvme_single_async 00:02:51.760 [202/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:51.760 [203/203] Linking target tools/xnvme 00:02:51.760 INFO: autodetecting backend as ninja 00:02:51.760 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:51.760 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:58.346 The Meson build system 00:02:58.346 Version: 1.5.0 00:02:58.346 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:58.346 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:58.346 Build type: native build 00:02:58.346 Program cat found: YES 
(/usr/bin/cat) 00:02:58.346 Project name: DPDK 00:02:58.346 Project version: 24.03.0 00:02:58.346 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:58.346 C linker for the host machine: cc ld.bfd 2.40-14 00:02:58.346 Host machine cpu family: x86_64 00:02:58.346 Host machine cpu: x86_64 00:02:58.346 Message: ## Building in Developer Mode ## 00:02:58.346 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:58.346 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:58.346 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:58.346 Program python3 found: YES (/usr/bin/python3) 00:02:58.346 Program cat found: YES (/usr/bin/cat) 00:02:58.346 Compiler for C supports arguments -march=native: YES 00:02:58.346 Checking for size of "void *" : 8 00:02:58.346 Checking for size of "void *" : 8 (cached) 00:02:58.346 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:58.346 Library m found: YES 00:02:58.346 Library numa found: YES 00:02:58.346 Has header "numaif.h" : YES 00:02:58.346 Library fdt found: NO 00:02:58.346 Library execinfo found: NO 00:02:58.346 Has header "execinfo.h" : YES 00:02:58.346 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:58.346 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:58.346 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:58.346 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:58.346 Run-time dependency openssl found: YES 3.1.1 00:02:58.346 Run-time dependency libpcap found: YES 1.10.4 00:02:58.346 Has header "pcap.h" with dependency libpcap: YES 00:02:58.346 Compiler for C supports arguments -Wcast-qual: YES 00:02:58.346 Compiler for C supports arguments -Wdeprecated: YES 00:02:58.346 Compiler for C supports arguments -Wformat: YES 00:02:58.346 Compiler for C supports arguments 
-Wformat-nonliteral: NO 00:02:58.346 Compiler for C supports arguments -Wformat-security: NO 00:02:58.347 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:58.347 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:58.347 Compiler for C supports arguments -Wnested-externs: YES 00:02:58.347 Compiler for C supports arguments -Wold-style-definition: YES 00:02:58.347 Compiler for C supports arguments -Wpointer-arith: YES 00:02:58.347 Compiler for C supports arguments -Wsign-compare: YES 00:02:58.347 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:58.347 Compiler for C supports arguments -Wundef: YES 00:02:58.347 Compiler for C supports arguments -Wwrite-strings: YES 00:02:58.347 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:58.347 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:58.347 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:58.347 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:58.347 Program objdump found: YES (/usr/bin/objdump) 00:02:58.347 Compiler for C supports arguments -mavx512f: YES 00:02:58.347 Checking if "AVX512 checking" compiles: YES 00:02:58.347 Fetching value of define "__SSE4_2__" : 1 00:02:58.347 Fetching value of define "__AES__" : 1 00:02:58.347 Fetching value of define "__AVX__" : 1 00:02:58.347 Fetching value of define "__AVX2__" : 1 00:02:58.347 Fetching value of define "__AVX512BW__" : 1 00:02:58.347 Fetching value of define "__AVX512CD__" : 1 00:02:58.347 Fetching value of define "__AVX512DQ__" : 1 00:02:58.347 Fetching value of define "__AVX512F__" : 1 00:02:58.347 Fetching value of define "__AVX512VL__" : 1 00:02:58.347 Fetching value of define "__PCLMUL__" : 1 00:02:58.347 Fetching value of define "__RDRND__" : 1 00:02:58.347 Fetching value of define "__RDSEED__" : 1 00:02:58.347 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:58.347 Fetching value of define "__znver1__" : (undefined) 
00:02:58.347 Fetching value of define "__znver2__" : (undefined) 00:02:58.347 Fetching value of define "__znver3__" : (undefined) 00:02:58.347 Fetching value of define "__znver4__" : (undefined) 00:02:58.347 Library asan found: YES 00:02:58.347 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:58.347 Message: lib/log: Defining dependency "log" 00:02:58.347 Message: lib/kvargs: Defining dependency "kvargs" 00:02:58.347 Message: lib/telemetry: Defining dependency "telemetry" 00:02:58.347 Library rt found: YES 00:02:58.347 Checking for function "getentropy" : NO 00:02:58.347 Message: lib/eal: Defining dependency "eal" 00:02:58.347 Message: lib/ring: Defining dependency "ring" 00:02:58.347 Message: lib/rcu: Defining dependency "rcu" 00:02:58.347 Message: lib/mempool: Defining dependency "mempool" 00:02:58.347 Message: lib/mbuf: Defining dependency "mbuf" 00:02:58.347 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:58.347 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.347 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:58.347 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:58.347 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:58.347 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:58.347 Compiler for C supports arguments -mpclmul: YES 00:02:58.347 Compiler for C supports arguments -maes: YES 00:02:58.347 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:58.347 Compiler for C supports arguments -mavx512bw: YES 00:02:58.347 Compiler for C supports arguments -mavx512dq: YES 00:02:58.347 Compiler for C supports arguments -mavx512vl: YES 00:02:58.347 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:58.347 Compiler for C supports arguments -mavx2: YES 00:02:58.347 Compiler for C supports arguments -mavx: YES 00:02:58.347 Message: lib/net: Defining dependency "net" 00:02:58.347 Message: lib/meter: Defining dependency "meter" 00:02:58.347 Message: lib/ethdev: Defining 
dependency "ethdev" 00:02:58.347 Message: lib/pci: Defining dependency "pci" 00:02:58.347 Message: lib/cmdline: Defining dependency "cmdline" 00:02:58.347 Message: lib/hash: Defining dependency "hash" 00:02:58.347 Message: lib/timer: Defining dependency "timer" 00:02:58.347 Message: lib/compressdev: Defining dependency "compressdev" 00:02:58.347 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:58.347 Message: lib/dmadev: Defining dependency "dmadev" 00:02:58.347 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:58.347 Message: lib/power: Defining dependency "power" 00:02:58.347 Message: lib/reorder: Defining dependency "reorder" 00:02:58.347 Message: lib/security: Defining dependency "security" 00:02:58.347 Has header "linux/userfaultfd.h" : YES 00:02:58.347 Has header "linux/vduse.h" : YES 00:02:58.347 Message: lib/vhost: Defining dependency "vhost" 00:02:58.347 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:58.347 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:58.347 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:58.347 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:58.347 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:58.347 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:58.347 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:58.347 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:58.347 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:58.347 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:58.347 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:58.347 Configuring doxy-api-html.conf using configuration 00:02:58.347 Configuring doxy-api-man.conf using configuration 00:02:58.347 Program mandb found: YES (/usr/bin/mandb) 00:02:58.347 Program sphinx-build found: NO 
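The many `Compiler for C supports arguments -…: YES/NO` lines in the DPDK configure output come from meson probing the toolchain: it test-compiles a trivial program with each candidate flag and records whether the compiler accepts it. A rough stand-alone equivalent of that probe, for illustration (the `supports_cflag` helper is hypothetical, not meson's implementation; it prints SKIP when no C compiler is on PATH):

```shell
#!/bin/sh
# Rough equivalent of meson's "Compiler for C supports arguments" check:
# compile an empty program with the flag and report YES/NO.
# supports_cflag is an illustrative helper, not part of meson or DPDK.
supports_cflag() {
    flag=$1
    cc=${CC:-cc}
    command -v "$cc" >/dev/null 2>&1 || { echo "SKIP"; return 2; }
    if printf 'int main(void){return 0;}\n' | \
       "$cc" "$flag" -x c -o /dev/null - 2>/dev/null; then
        echo "YES"
    else
        echo "NO"
    fi
}

supports_cflag -mavx512f
```

This is also why the log shows both positive results (`-Wcast-qual: YES`) and negative ones (`-Wformat-nonliteral: NO`): a NO just means the probe compile failed, and the build proceeds without that flag.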
00:02:58.347 Configuring rte_build_config.h using configuration
00:02:58.347 Message:
00:02:58.347 =================
00:02:58.347 Applications Enabled
00:02:58.347 =================
00:02:58.347
00:02:58.347 apps:
00:02:58.347
00:02:58.347
00:02:58.347 Message:
00:02:58.347 =================
00:02:58.347 Libraries Enabled
00:02:58.347 =================
00:02:58.347
00:02:58.347 libs:
00:02:58.347 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:58.347 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:58.347 cryptodev, dmadev, power, reorder, security, vhost,
00:02:58.347
00:02:58.347 Message:
00:02:58.347 ===============
00:02:58.347 Drivers Enabled
00:02:58.347 ===============
00:02:58.347
00:02:58.347 common:
00:02:58.347
00:02:58.347 bus:
00:02:58.347 pci, vdev,
00:02:58.347 mempool:
00:02:58.347 ring,
00:02:58.347 dma:
00:02:58.347
00:02:58.347 net:
00:02:58.347
00:02:58.347 crypto:
00:02:58.347
00:02:58.347 compress:
00:02:58.347
00:02:58.347 vdpa:
00:02:58.347
00:02:58.347
00:02:58.347 Message:
00:02:58.347 =================
00:02:58.347 Content Skipped
00:02:58.347 =================
00:02:58.347
00:02:58.347 apps:
00:02:58.347 dumpcap: explicitly disabled via build config
00:02:58.347 graph: explicitly disabled via build config
00:02:58.347 pdump: explicitly disabled via build config
00:02:58.347 proc-info: explicitly disabled via build config
00:02:58.347 test-acl: explicitly disabled via build config
00:02:58.347 test-bbdev: explicitly disabled via build config
00:02:58.347 test-cmdline: explicitly disabled via build config
00:02:58.347 test-compress-perf: explicitly disabled via build config
00:02:58.347 test-crypto-perf: explicitly disabled via build config
00:02:58.347 test-dma-perf: explicitly disabled via build config
00:02:58.347 test-eventdev: explicitly disabled via build config
00:02:58.347 test-fib: explicitly disabled via build config
00:02:58.347 test-flow-perf: explicitly disabled via build config
00:02:58.347
test-gpudev: explicitly disabled via build config 00:02:58.347 test-mldev: explicitly disabled via build config 00:02:58.347 test-pipeline: explicitly disabled via build config 00:02:58.347 test-pmd: explicitly disabled via build config 00:02:58.347 test-regex: explicitly disabled via build config 00:02:58.347 test-sad: explicitly disabled via build config 00:02:58.347 test-security-perf: explicitly disabled via build config 00:02:58.347 00:02:58.347 libs: 00:02:58.347 argparse: explicitly disabled via build config 00:02:58.347 metrics: explicitly disabled via build config 00:02:58.347 acl: explicitly disabled via build config 00:02:58.347 bbdev: explicitly disabled via build config 00:02:58.347 bitratestats: explicitly disabled via build config 00:02:58.347 bpf: explicitly disabled via build config 00:02:58.347 cfgfile: explicitly disabled via build config 00:02:58.347 distributor: explicitly disabled via build config 00:02:58.347 efd: explicitly disabled via build config 00:02:58.347 eventdev: explicitly disabled via build config 00:02:58.347 dispatcher: explicitly disabled via build config 00:02:58.347 gpudev: explicitly disabled via build config 00:02:58.347 gro: explicitly disabled via build config 00:02:58.347 gso: explicitly disabled via build config 00:02:58.347 ip_frag: explicitly disabled via build config 00:02:58.347 jobstats: explicitly disabled via build config 00:02:58.347 latencystats: explicitly disabled via build config 00:02:58.347 lpm: explicitly disabled via build config 00:02:58.347 member: explicitly disabled via build config 00:02:58.347 pcapng: explicitly disabled via build config 00:02:58.347 rawdev: explicitly disabled via build config 00:02:58.347 regexdev: explicitly disabled via build config 00:02:58.347 mldev: explicitly disabled via build config 00:02:58.347 rib: explicitly disabled via build config 00:02:58.347 sched: explicitly disabled via build config 00:02:58.347 stack: explicitly disabled via build config 00:02:58.347 ipsec: 
explicitly disabled via build config 00:02:58.347 pdcp: explicitly disabled via build config 00:02:58.347 fib: explicitly disabled via build config 00:02:58.347 port: explicitly disabled via build config 00:02:58.347 pdump: explicitly disabled via build config 00:02:58.347 table: explicitly disabled via build config 00:02:58.347 pipeline: explicitly disabled via build config 00:02:58.347 graph: explicitly disabled via build config 00:02:58.347 node: explicitly disabled via build config 00:02:58.347 00:02:58.347 drivers: 00:02:58.348 common/cpt: not in enabled drivers build config 00:02:58.348 common/dpaax: not in enabled drivers build config 00:02:58.348 common/iavf: not in enabled drivers build config 00:02:58.348 common/idpf: not in enabled drivers build config 00:02:58.348 common/ionic: not in enabled drivers build config 00:02:58.348 common/mvep: not in enabled drivers build config 00:02:58.348 common/octeontx: not in enabled drivers build config 00:02:58.348 bus/auxiliary: not in enabled drivers build config 00:02:58.348 bus/cdx: not in enabled drivers build config 00:02:58.348 bus/dpaa: not in enabled drivers build config 00:02:58.348 bus/fslmc: not in enabled drivers build config 00:02:58.348 bus/ifpga: not in enabled drivers build config 00:02:58.348 bus/platform: not in enabled drivers build config 00:02:58.348 bus/uacce: not in enabled drivers build config 00:02:58.348 bus/vmbus: not in enabled drivers build config 00:02:58.348 common/cnxk: not in enabled drivers build config 00:02:58.348 common/mlx5: not in enabled drivers build config 00:02:58.348 common/nfp: not in enabled drivers build config 00:02:58.348 common/nitrox: not in enabled drivers build config 00:02:58.348 common/qat: not in enabled drivers build config 00:02:58.348 common/sfc_efx: not in enabled drivers build config 00:02:58.348 mempool/bucket: not in enabled drivers build config 00:02:58.348 mempool/cnxk: not in enabled drivers build config 00:02:58.348 mempool/dpaa: not in enabled 
drivers build config 00:02:58.348 mempool/dpaa2: not in enabled drivers build config 00:02:58.348 mempool/octeontx: not in enabled drivers build config 00:02:58.348 mempool/stack: not in enabled drivers build config 00:02:58.348 dma/cnxk: not in enabled drivers build config 00:02:58.348 dma/dpaa: not in enabled drivers build config 00:02:58.348 dma/dpaa2: not in enabled drivers build config 00:02:58.348 dma/hisilicon: not in enabled drivers build config 00:02:58.348 dma/idxd: not in enabled drivers build config 00:02:58.348 dma/ioat: not in enabled drivers build config 00:02:58.348 dma/skeleton: not in enabled drivers build config 00:02:58.348 net/af_packet: not in enabled drivers build config 00:02:58.348 net/af_xdp: not in enabled drivers build config 00:02:58.348 net/ark: not in enabled drivers build config 00:02:58.348 net/atlantic: not in enabled drivers build config 00:02:58.348 net/avp: not in enabled drivers build config 00:02:58.348 net/axgbe: not in enabled drivers build config 00:02:58.348 net/bnx2x: not in enabled drivers build config 00:02:58.348 net/bnxt: not in enabled drivers build config 00:02:58.348 net/bonding: not in enabled drivers build config 00:02:58.348 net/cnxk: not in enabled drivers build config 00:02:58.348 net/cpfl: not in enabled drivers build config 00:02:58.348 net/cxgbe: not in enabled drivers build config 00:02:58.348 net/dpaa: not in enabled drivers build config 00:02:58.348 net/dpaa2: not in enabled drivers build config 00:02:58.348 net/e1000: not in enabled drivers build config 00:02:58.348 net/ena: not in enabled drivers build config 00:02:58.348 net/enetc: not in enabled drivers build config 00:02:58.348 net/enetfec: not in enabled drivers build config 00:02:58.348 net/enic: not in enabled drivers build config 00:02:58.348 net/failsafe: not in enabled drivers build config 00:02:58.348 net/fm10k: not in enabled drivers build config 00:02:58.348 net/gve: not in enabled drivers build config 00:02:58.348 net/hinic: not in enabled 
drivers build config 00:02:58.348 net/hns3: not in enabled drivers build config 00:02:58.348 net/i40e: not in enabled drivers build config 00:02:58.348 net/iavf: not in enabled drivers build config 00:02:58.348 net/ice: not in enabled drivers build config 00:02:58.348 net/idpf: not in enabled drivers build config 00:02:58.348 net/igc: not in enabled drivers build config 00:02:58.348 net/ionic: not in enabled drivers build config 00:02:58.348 net/ipn3ke: not in enabled drivers build config 00:02:58.348 net/ixgbe: not in enabled drivers build config 00:02:58.348 net/mana: not in enabled drivers build config 00:02:58.348 net/memif: not in enabled drivers build config 00:02:58.348 net/mlx4: not in enabled drivers build config 00:02:58.348 net/mlx5: not in enabled drivers build config 00:02:58.348 net/mvneta: not in enabled drivers build config 00:02:58.348 net/mvpp2: not in enabled drivers build config 00:02:58.348 net/netvsc: not in enabled drivers build config 00:02:58.348 net/nfb: not in enabled drivers build config 00:02:58.348 net/nfp: not in enabled drivers build config 00:02:58.348 net/ngbe: not in enabled drivers build config 00:02:58.348 net/null: not in enabled drivers build config 00:02:58.348 net/octeontx: not in enabled drivers build config 00:02:58.348 net/octeon_ep: not in enabled drivers build config 00:02:58.348 net/pcap: not in enabled drivers build config 00:02:58.348 net/pfe: not in enabled drivers build config 00:02:58.348 net/qede: not in enabled drivers build config 00:02:58.348 net/ring: not in enabled drivers build config 00:02:58.348 net/sfc: not in enabled drivers build config 00:02:58.348 net/softnic: not in enabled drivers build config 00:02:58.348 net/tap: not in enabled drivers build config 00:02:58.348 net/thunderx: not in enabled drivers build config 00:02:58.348 net/txgbe: not in enabled drivers build config 00:02:58.348 net/vdev_netvsc: not in enabled drivers build config 00:02:58.348 net/vhost: not in enabled drivers build config 
00:02:58.348 net/virtio: not in enabled drivers build config 00:02:58.348 net/vmxnet3: not in enabled drivers build config 00:02:58.348 raw/*: missing internal dependency, "rawdev" 00:02:58.348 crypto/armv8: not in enabled drivers build config 00:02:58.348 crypto/bcmfs: not in enabled drivers build config 00:02:58.348 crypto/caam_jr: not in enabled drivers build config 00:02:58.348 crypto/ccp: not in enabled drivers build config 00:02:58.348 crypto/cnxk: not in enabled drivers build config 00:02:58.348 crypto/dpaa_sec: not in enabled drivers build config 00:02:58.348 crypto/dpaa2_sec: not in enabled drivers build config 00:02:58.348 crypto/ipsec_mb: not in enabled drivers build config 00:02:58.348 crypto/mlx5: not in enabled drivers build config 00:02:58.348 crypto/mvsam: not in enabled drivers build config 00:02:58.348 crypto/nitrox: not in enabled drivers build config 00:02:58.348 crypto/null: not in enabled drivers build config 00:02:58.348 crypto/octeontx: not in enabled drivers build config 00:02:58.348 crypto/openssl: not in enabled drivers build config 00:02:58.348 crypto/scheduler: not in enabled drivers build config 00:02:58.348 crypto/uadk: not in enabled drivers build config 00:02:58.348 crypto/virtio: not in enabled drivers build config 00:02:58.348 compress/isal: not in enabled drivers build config 00:02:58.348 compress/mlx5: not in enabled drivers build config 00:02:58.348 compress/nitrox: not in enabled drivers build config 00:02:58.348 compress/octeontx: not in enabled drivers build config 00:02:58.348 compress/zlib: not in enabled drivers build config 00:02:58.348 regex/*: missing internal dependency, "regexdev" 00:02:58.348 ml/*: missing internal dependency, "mldev" 00:02:58.348 vdpa/ifc: not in enabled drivers build config 00:02:58.348 vdpa/mlx5: not in enabled drivers build config 00:02:58.348 vdpa/nfp: not in enabled drivers build config 00:02:58.348 vdpa/sfc: not in enabled drivers build config 00:02:58.348 event/*: missing internal 
dependency, "eventdev" 00:02:58.348 baseband/*: missing internal dependency, "bbdev" 00:02:58.348 gpu/*: missing internal dependency, "gpudev" 00:02:58.348 00:02:58.348 00:02:58.348 Build targets in project: 84 00:02:58.348 00:02:58.348 DPDK 24.03.0 00:02:58.348 00:02:58.348 User defined options 00:02:58.348 buildtype : debug 00:02:58.348 default_library : shared 00:02:58.348 libdir : lib 00:02:58.348 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:58.348 b_sanitize : address 00:02:58.348 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:58.348 c_link_args : 00:02:58.348 cpu_instruction_set: native 00:02:58.348 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:58.348 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:58.348 enable_docs : false 00:02:58.348 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:58.348 enable_kmods : false 00:02:58.348 max_lcores : 128 00:02:58.348 tests : false 00:02:58.348 00:02:58.349 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:58.606 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:58.606 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:58.606 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:58.606 [3/267] Linking static target lib/librte_kvargs.a 00:02:58.606 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:58.606 [5/267] Linking static target lib/librte_log.a 00:02:58.606 [6/267] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:58.864 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:58.864 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:58.864 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:58.864 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.864 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:58.864 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:58.864 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:59.122 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:59.122 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:59.122 [16/267] Linking static target lib/librte_telemetry.a 00:02:59.122 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:59.122 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:59.380 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.380 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:59.380 [21/267] Linking target lib/librte_log.so.24.1 00:02:59.380 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:59.380 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:59.380 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:59.380 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:59.380 [26/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:59.380 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:59.637 
[28/267] Linking target lib/librte_kvargs.so.24.1 00:02:59.637 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:59.637 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:59.637 [31/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:59.637 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:59.895 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:59.895 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:59.895 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.895 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:59.895 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:59.895 [38/267] Linking target lib/librte_telemetry.so.24.1 00:02:59.895 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:59.895 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:59.895 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:59.895 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:00.152 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.152 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:00.152 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:00.152 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:00.152 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.152 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.152 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:03:00.410 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:00.410 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:00.410 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.410 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:00.410 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:00.410 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.410 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.410 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:00.410 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:00.410 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.667 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:00.667 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:00.667 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:00.667 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:00.667 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:00.667 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:00.925 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:00.925 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.925 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:00.925 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:00.925 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.925 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:00.925 [72/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:00.925 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:00.925 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:01.183 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:01.183 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:01.183 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:01.183 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:01.183 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:01.183 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:01.183 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:01.183 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:01.440 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:01.441 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:01.441 [85/267] Linking static target lib/librte_ring.a 00:03:01.441 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:01.441 [87/267] Linking static target lib/librte_eal.a 00:03:01.441 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:01.441 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:01.441 [90/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.441 [91/267] Linking static target lib/librte_rcu.a 00:03:01.441 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:01.699 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:01.699 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:01.699 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:01.699 [96/267] 
Linking static target lib/librte_mempool.a 00:03:01.699 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.699 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:01.957 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:01.957 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:01.957 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.957 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:01.957 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:01.958 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:01.958 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:01.958 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.216 [107/267] Linking static target lib/librte_net.a 00:03:02.216 [108/267] Linking static target lib/librte_meter.a 00:03:02.216 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:02.216 [110/267] Linking static target lib/librte_mbuf.a 00:03:02.216 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.216 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.474 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:02.474 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:02.474 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.474 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.474 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.732 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.732 [119/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.990 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.990 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.990 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.990 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.990 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.990 [125/267] Linking static target lib/librte_pci.a 00:03:02.990 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:02.990 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.248 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.248 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.248 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.248 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:03.248 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.248 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:03.248 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.506 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.507 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.507 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.507 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.507 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.507 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.507 [141/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.507 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.507 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:03.507 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.507 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.507 [146/267] Linking static target lib/librte_cmdline.a 00:03:03.764 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.765 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.765 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:03.765 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.023 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.023 [152/267] Linking static target lib/librte_ethdev.a 00:03:04.023 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.023 [154/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.023 [155/267] Linking static target lib/librte_timer.a 00:03:04.023 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.023 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.023 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.023 [159/267] Linking static target lib/librte_compressdev.a 00:03:04.281 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.281 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:04.281 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.539 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:04.539 [164/267] Linking 
static target lib/librte_dmadev.a 00:03:04.539 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.539 [166/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.539 [167/267] Linking static target lib/librte_hash.a 00:03:04.539 [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:04.539 [169/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:04.539 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:04.539 [171/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.539 [172/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:04.808 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.808 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:04.808 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.808 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:05.067 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:05.067 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.067 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:05.067 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:05.067 [181/267] Linking static target lib/librte_power.a 00:03:05.067 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:05.325 [183/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.325 [184/267] Linking static target lib/librte_reorder.a 00:03:05.325 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.325 [186/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.325 [187/267] Linking static target lib/librte_cryptodev.a 00:03:05.583 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:05.583 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:05.583 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:05.583 [191/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.930 [192/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.930 [193/267] Linking static target lib/librte_security.a 00:03:05.930 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.930 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:06.189 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:06.189 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.448 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.448 [199/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.448 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:06.448 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.448 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:06.448 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.706 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.706 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.706 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.706 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.706 [208/267] Linking 
static target drivers/libtmp_rte_bus_pci.a 00:03:06.706 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:06.965 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.965 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.965 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.965 [213/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.965 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.965 [215/267] Linking static target drivers/librte_bus_vdev.a 00:03:06.965 [216/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.965 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.965 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.965 [219/267] Linking static target drivers/librte_bus_pci.a 00:03:07.224 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.224 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.224 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.224 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.224 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:07.224 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.482 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.048 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:08.614 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:08.614 [229/267] Linking target lib/librte_eal.so.24.1 00:03:08.614 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:08.614 [231/267] Linking target lib/librte_pci.so.24.1 00:03:08.614 [232/267] Linking target lib/librte_ring.so.24.1 00:03:08.614 [233/267] Linking target lib/librte_meter.so.24.1 00:03:08.614 [234/267] Linking target lib/librte_dmadev.so.24.1 00:03:08.614 [235/267] Linking target lib/librte_timer.so.24.1 00:03:08.614 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:08.871 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:08.871 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:08.871 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:08.871 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:08.871 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:08.871 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:08.871 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:08.871 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:08.871 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:08.871 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:08.871 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:08.871 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:09.128 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:09.128 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:09.128 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:09.128 [252/267] Linking target lib/librte_net.so.24.1 00:03:09.128 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:09.129 [254/267] 
Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:09.129 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:09.129 [256/267] Linking target lib/librte_hash.so.24.1 00:03:09.129 [257/267] Linking target lib/librte_cmdline.so.24.1 00:03:09.129 [258/267] Linking target lib/librte_security.so.24.1 00:03:09.387 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:09.387 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.387 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:09.645 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:09.645 [263/267] Linking target lib/librte_power.so.24.1 00:03:11.020 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.020 [265/267] Linking static target lib/librte_vhost.a 00:03:11.956 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.214 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:12.214 INFO: autodetecting backend as ninja 00:03:12.214 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:24.412 CC lib/ut/ut.o 00:03:24.412 CC lib/log/log.o 00:03:24.412 CC lib/log/log_flags.o 00:03:24.412 CC lib/log/log_deprecated.o 00:03:24.412 CC lib/ut_mock/mock.o 00:03:24.412 LIB libspdk_log.a 00:03:24.412 LIB libspdk_ut.a 00:03:24.412 LIB libspdk_ut_mock.a 00:03:24.412 SO libspdk_ut.so.2.0 00:03:24.412 SO libspdk_log.so.7.0 00:03:24.412 SO libspdk_ut_mock.so.6.0 00:03:24.412 SYMLINK libspdk_ut.so 00:03:24.412 SYMLINK libspdk_ut_mock.so 00:03:24.412 SYMLINK libspdk_log.so 00:03:24.412 CC lib/util/base64.o 00:03:24.412 CC lib/ioat/ioat.o 00:03:24.412 CC lib/dma/dma.o 00:03:24.412 CC lib/util/bit_array.o 00:03:24.412 CXX lib/trace_parser/trace.o 00:03:24.412 CC 
lib/util/cpuset.o 00:03:24.412 CC lib/util/crc16.o 00:03:24.412 CC lib/util/crc32.o 00:03:24.412 CC lib/util/crc32c.o 00:03:24.412 CC lib/vfio_user/host/vfio_user_pci.o 00:03:24.412 CC lib/util/crc32_ieee.o 00:03:24.412 CC lib/util/crc64.o 00:03:24.670 CC lib/util/dif.o 00:03:24.670 LIB libspdk_dma.a 00:03:24.670 CC lib/util/fd.o 00:03:24.670 SO libspdk_dma.so.5.0 00:03:24.670 CC lib/util/fd_group.o 00:03:24.670 CC lib/util/file.o 00:03:24.670 CC lib/util/hexlify.o 00:03:24.670 CC lib/util/iov.o 00:03:24.670 SYMLINK libspdk_dma.so 00:03:24.670 CC lib/vfio_user/host/vfio_user.o 00:03:24.670 LIB libspdk_ioat.a 00:03:24.670 SO libspdk_ioat.so.7.0 00:03:24.670 CC lib/util/math.o 00:03:24.670 SYMLINK libspdk_ioat.so 00:03:24.670 CC lib/util/net.o 00:03:24.670 CC lib/util/pipe.o 00:03:24.670 CC lib/util/strerror_tls.o 00:03:24.670 CC lib/util/string.o 00:03:24.670 CC lib/util/uuid.o 00:03:24.670 CC lib/util/xor.o 00:03:24.670 LIB libspdk_vfio_user.a 00:03:24.926 SO libspdk_vfio_user.so.5.0 00:03:24.926 CC lib/util/zipf.o 00:03:24.926 CC lib/util/md5.o 00:03:24.926 SYMLINK libspdk_vfio_user.so 00:03:25.183 LIB libspdk_util.a 00:03:25.183 SO libspdk_util.so.10.0 00:03:25.183 LIB libspdk_trace_parser.a 00:03:25.183 SYMLINK libspdk_util.so 00:03:25.441 SO libspdk_trace_parser.so.6.0 00:03:25.441 SYMLINK libspdk_trace_parser.so 00:03:25.441 CC lib/idxd/idxd.o 00:03:25.441 CC lib/idxd/idxd_kernel.o 00:03:25.441 CC lib/idxd/idxd_user.o 00:03:25.441 CC lib/conf/conf.o 00:03:25.441 CC lib/rdma_utils/rdma_utils.o 00:03:25.441 CC lib/json/json_parse.o 00:03:25.441 CC lib/json/json_util.o 00:03:25.441 CC lib/vmd/vmd.o 00:03:25.441 CC lib/env_dpdk/env.o 00:03:25.441 CC lib/rdma_provider/common.o 00:03:25.441 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.697 LIB libspdk_conf.a 00:03:25.698 CC lib/vmd/led.o 00:03:25.698 CC lib/json/json_write.o 00:03:25.698 SO libspdk_conf.so.6.0 00:03:25.698 CC lib/env_dpdk/memory.o 00:03:25.698 LIB libspdk_rdma_utils.a 00:03:25.698 SYMLINK 
libspdk_conf.so 00:03:25.698 SO libspdk_rdma_utils.so.1.0 00:03:25.698 CC lib/env_dpdk/pci.o 00:03:25.698 CC lib/env_dpdk/init.o 00:03:25.698 SYMLINK libspdk_rdma_utils.so 00:03:25.698 CC lib/env_dpdk/threads.o 00:03:25.698 CC lib/env_dpdk/pci_ioat.o 00:03:25.698 LIB libspdk_rdma_provider.a 00:03:25.698 SO libspdk_rdma_provider.so.6.0 00:03:25.698 CC lib/env_dpdk/pci_virtio.o 00:03:25.698 LIB libspdk_json.a 00:03:25.698 SYMLINK libspdk_rdma_provider.so 00:03:25.698 CC lib/env_dpdk/pci_vmd.o 00:03:25.955 CC lib/env_dpdk/pci_idxd.o 00:03:25.955 SO libspdk_json.so.6.0 00:03:25.955 SYMLINK libspdk_json.so 00:03:25.955 CC lib/env_dpdk/pci_event.o 00:03:25.955 CC lib/env_dpdk/sigbus_handler.o 00:03:25.955 CC lib/env_dpdk/pci_dpdk.o 00:03:25.955 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.955 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.955 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.955 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.955 LIB libspdk_idxd.a 00:03:25.955 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.955 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.955 SO libspdk_idxd.so.12.1 00:03:25.955 LIB libspdk_vmd.a 00:03:26.213 SO libspdk_vmd.so.6.0 00:03:26.213 SYMLINK libspdk_idxd.so 00:03:26.213 SYMLINK libspdk_vmd.so 00:03:26.213 LIB libspdk_jsonrpc.a 00:03:26.213 SO libspdk_jsonrpc.so.6.0 00:03:26.213 SYMLINK libspdk_jsonrpc.so 00:03:26.471 CC lib/rpc/rpc.o 00:03:26.784 LIB libspdk_rpc.a 00:03:26.784 SO libspdk_rpc.so.6.0 00:03:26.784 LIB libspdk_env_dpdk.a 00:03:26.784 SYMLINK libspdk_rpc.so 00:03:26.784 SO libspdk_env_dpdk.so.15.0 00:03:27.042 SYMLINK libspdk_env_dpdk.so 00:03:27.042 CC lib/notify/notify.o 00:03:27.042 CC lib/trace/trace.o 00:03:27.042 CC lib/trace/trace_flags.o 00:03:27.042 CC lib/notify/notify_rpc.o 00:03:27.042 CC lib/trace/trace_rpc.o 00:03:27.042 CC lib/keyring/keyring_rpc.o 00:03:27.042 CC lib/keyring/keyring.o 00:03:27.042 LIB libspdk_notify.a 00:03:27.042 SO libspdk_notify.so.6.0 00:03:27.042 LIB libspdk_keyring.a 00:03:27.300 SYMLINK libspdk_notify.so 
00:03:27.300 LIB libspdk_trace.a 00:03:27.300 SO libspdk_keyring.so.2.0 00:03:27.300 SO libspdk_trace.so.11.0 00:03:27.300 SYMLINK libspdk_keyring.so 00:03:27.300 SYMLINK libspdk_trace.so 00:03:27.558 CC lib/sock/sock.o 00:03:27.558 CC lib/sock/sock_rpc.o 00:03:27.558 CC lib/thread/thread.o 00:03:27.558 CC lib/thread/iobuf.o 00:03:27.816 LIB libspdk_sock.a 00:03:27.816 SO libspdk_sock.so.10.0 00:03:27.816 SYMLINK libspdk_sock.so 00:03:28.074 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.074 CC lib/nvme/nvme_ns_cmd.o 00:03:28.074 CC lib/nvme/nvme_fabric.o 00:03:28.074 CC lib/nvme/nvme_ctrlr.o 00:03:28.074 CC lib/nvme/nvme_pcie_common.o 00:03:28.074 CC lib/nvme/nvme_pcie.o 00:03:28.074 CC lib/nvme/nvme_ns.o 00:03:28.075 CC lib/nvme/nvme_qpair.o 00:03:28.075 CC lib/nvme/nvme.o 00:03:28.640 CC lib/nvme/nvme_quirks.o 00:03:28.640 CC lib/nvme/nvme_transport.o 00:03:28.640 CC lib/nvme/nvme_discovery.o 00:03:28.640 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.898 LIB libspdk_thread.a 00:03:28.898 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.898 SO libspdk_thread.so.10.1 00:03:28.898 CC lib/nvme/nvme_tcp.o 00:03:28.898 SYMLINK libspdk_thread.so 00:03:28.898 CC lib/nvme/nvme_opal.o 00:03:28.898 CC lib/nvme/nvme_io_msg.o 00:03:28.898 CC lib/accel/accel.o 00:03:28.898 CC lib/nvme/nvme_poll_group.o 00:03:29.157 CC lib/nvme/nvme_zns.o 00:03:29.157 CC lib/nvme/nvme_stubs.o 00:03:29.415 CC lib/nvme/nvme_auth.o 00:03:29.415 CC lib/nvme/nvme_cuse.o 00:03:29.415 CC lib/nvme/nvme_rdma.o 00:03:29.415 CC lib/accel/accel_rpc.o 00:03:29.673 CC lib/accel/accel_sw.o 00:03:29.673 CC lib/blob/blobstore.o 00:03:29.673 CC lib/init/json_config.o 00:03:29.673 CC lib/virtio/virtio.o 00:03:29.932 CC lib/init/subsystem.o 00:03:29.932 CC lib/init/subsystem_rpc.o 00:03:29.932 CC lib/virtio/virtio_vhost_user.o 00:03:29.932 CC lib/virtio/virtio_vfio_user.o 00:03:29.932 CC lib/virtio/virtio_pci.o 00:03:29.932 CC lib/init/rpc.o 00:03:30.190 LIB libspdk_accel.a 00:03:30.190 CC lib/fsdev/fsdev.o 00:03:30.190 SO 
libspdk_accel.so.16.0 00:03:30.190 CC lib/fsdev/fsdev_io.o 00:03:30.190 LIB libspdk_init.a 00:03:30.190 SYMLINK libspdk_accel.so 00:03:30.190 CC lib/fsdev/fsdev_rpc.o 00:03:30.190 CC lib/blob/request.o 00:03:30.190 SO libspdk_init.so.6.0 00:03:30.190 CC lib/blob/zeroes.o 00:03:30.190 LIB libspdk_virtio.a 00:03:30.190 SO libspdk_virtio.so.7.0 00:03:30.190 SYMLINK libspdk_init.so 00:03:30.190 SYMLINK libspdk_virtio.so 00:03:30.190 CC lib/blob/blob_bs_dev.o 00:03:30.448 CC lib/bdev/bdev.o 00:03:30.448 CC lib/bdev/bdev_rpc.o 00:03:30.448 CC lib/bdev/bdev_zone.o 00:03:30.448 CC lib/event/app.o 00:03:30.448 CC lib/bdev/part.o 00:03:30.448 CC lib/bdev/scsi_nvme.o 00:03:30.448 CC lib/event/reactor.o 00:03:30.448 CC lib/event/log_rpc.o 00:03:30.706 CC lib/event/app_rpc.o 00:03:30.706 LIB libspdk_nvme.a 00:03:30.706 CC lib/event/scheduler_static.o 00:03:30.706 LIB libspdk_fsdev.a 00:03:30.706 SO libspdk_fsdev.so.1.0 00:03:30.706 SYMLINK libspdk_fsdev.so 00:03:30.706 SO libspdk_nvme.so.14.0 00:03:30.964 LIB libspdk_event.a 00:03:30.964 SO libspdk_event.so.14.0 00:03:30.964 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:30.964 SYMLINK libspdk_nvme.so 00:03:30.964 SYMLINK libspdk_event.so 00:03:31.531 LIB libspdk_fuse_dispatcher.a 00:03:31.531 SO libspdk_fuse_dispatcher.so.1.0 00:03:31.788 SYMLINK libspdk_fuse_dispatcher.so 00:03:32.722 LIB libspdk_blob.a 00:03:32.722 SO libspdk_blob.so.11.0 00:03:32.981 SYMLINK libspdk_blob.so 00:03:32.981 LIB libspdk_bdev.a 00:03:32.981 CC lib/blobfs/blobfs.o 00:03:32.981 CC lib/blobfs/tree.o 00:03:32.981 CC lib/lvol/lvol.o 00:03:33.239 SO libspdk_bdev.so.16.0 00:03:33.239 SYMLINK libspdk_bdev.so 00:03:33.239 CC lib/ublk/ublk_rpc.o 00:03:33.239 CC lib/ublk/ublk.o 00:03:33.239 CC lib/ftl/ftl_core.o 00:03:33.239 CC lib/ftl/ftl_init.o 00:03:33.239 CC lib/ftl/ftl_layout.o 00:03:33.240 CC lib/scsi/dev.o 00:03:33.240 CC lib/nbd/nbd.o 00:03:33.240 CC lib/nvmf/ctrlr.o 00:03:33.497 CC lib/nbd/nbd_rpc.o 00:03:33.497 CC lib/scsi/lun.o 00:03:33.497 CC 
lib/scsi/port.o 00:03:33.497 CC lib/scsi/scsi.o 00:03:33.761 CC lib/scsi/scsi_bdev.o 00:03:33.761 CC lib/nvmf/ctrlr_discovery.o 00:03:33.761 CC lib/nvmf/ctrlr_bdev.o 00:03:33.761 CC lib/ftl/ftl_debug.o 00:03:33.761 LIB libspdk_blobfs.a 00:03:33.761 CC lib/scsi/scsi_pr.o 00:03:33.761 SO libspdk_blobfs.so.10.0 00:03:33.761 LIB libspdk_nbd.a 00:03:33.761 SO libspdk_nbd.so.7.0 00:03:33.761 SYMLINK libspdk_blobfs.so 00:03:33.761 CC lib/nvmf/subsystem.o 00:03:33.761 SYMLINK libspdk_nbd.so 00:03:33.761 CC lib/nvmf/nvmf.o 00:03:34.024 CC lib/ftl/ftl_io.o 00:03:34.024 CC lib/scsi/scsi_rpc.o 00:03:34.024 LIB libspdk_ublk.a 00:03:34.024 SO libspdk_ublk.so.3.0 00:03:34.024 LIB libspdk_lvol.a 00:03:34.024 SO libspdk_lvol.so.10.0 00:03:34.024 CC lib/scsi/task.o 00:03:34.024 SYMLINK libspdk_ublk.so 00:03:34.024 CC lib/ftl/ftl_sb.o 00:03:34.024 CC lib/ftl/ftl_l2p.o 00:03:34.024 SYMLINK libspdk_lvol.so 00:03:34.024 CC lib/nvmf/nvmf_rpc.o 00:03:34.024 CC lib/nvmf/transport.o 00:03:34.283 CC lib/ftl/ftl_l2p_flat.o 00:03:34.283 CC lib/nvmf/tcp.o 00:03:34.283 CC lib/ftl/ftl_nv_cache.o 00:03:34.283 LIB libspdk_scsi.a 00:03:34.283 CC lib/ftl/ftl_band.o 00:03:34.283 CC lib/nvmf/stubs.o 00:03:34.283 SO libspdk_scsi.so.9.0 00:03:34.283 SYMLINK libspdk_scsi.so 00:03:34.283 CC lib/nvmf/mdns_server.o 00:03:34.542 CC lib/nvmf/rdma.o 00:03:34.800 CC lib/nvmf/auth.o 00:03:34.800 CC lib/ftl/ftl_band_ops.o 00:03:34.800 CC lib/ftl/ftl_writer.o 00:03:34.800 CC lib/ftl/ftl_rq.o 00:03:34.800 CC lib/iscsi/conn.o 00:03:34.800 CC lib/vhost/vhost.o 00:03:35.059 CC lib/vhost/vhost_rpc.o 00:03:35.059 CC lib/vhost/vhost_scsi.o 00:03:35.059 CC lib/vhost/vhost_blk.o 00:03:35.059 CC lib/ftl/ftl_reloc.o 00:03:35.059 CC lib/iscsi/init_grp.o 00:03:35.318 CC lib/iscsi/iscsi.o 00:03:35.318 CC lib/iscsi/param.o 00:03:35.318 CC lib/ftl/ftl_l2p_cache.o 00:03:35.577 CC lib/ftl/ftl_p2l.o 00:03:35.577 CC lib/ftl/ftl_p2l_log.o 00:03:35.577 CC lib/vhost/rte_vhost_user.o 00:03:35.577 CC lib/iscsi/portal_grp.o 00:03:35.577 CC 
lib/iscsi/tgt_node.o 00:03:35.835 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.835 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.835 CC lib/iscsi/iscsi_subsystem.o 00:03:35.835 CC lib/iscsi/iscsi_rpc.o 00:03:35.835 CC lib/iscsi/task.o 00:03:35.835 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.093 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.093 CC lib/ftl/utils/ftl_conf.o 00:03:36.093 CC lib/ftl/utils/ftl_md.o 00:03:36.351 CC lib/ftl/utils/ftl_mempool.o 00:03:36.351 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.351 CC lib/ftl/utils/ftl_property.o 00:03:36.351 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.351 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:36.351 LIB libspdk_iscsi.a 00:03:36.351 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:36.610 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:36.610 SO libspdk_iscsi.so.8.0 00:03:36.610 LIB libspdk_vhost.a 00:03:36.610 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:36.610 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.610 SO libspdk_vhost.so.8.0 00:03:36.610 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.610 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.610 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:36.610 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:36.610 SYMLINK libspdk_iscsi.so 00:03:36.610 CC lib/ftl/base/ftl_base_dev.o 00:03:36.610 SYMLINK libspdk_vhost.so 00:03:36.610 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.610 CC lib/ftl/ftl_trace.o 00:03:36.868 LIB libspdk_nvmf.a 00:03:36.868 LIB libspdk_ftl.a 00:03:36.868 
SO libspdk_nvmf.so.19.0 00:03:37.127 SO libspdk_ftl.so.9.0 00:03:37.127 SYMLINK libspdk_nvmf.so 00:03:37.384 SYMLINK libspdk_ftl.so 00:03:37.384 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.642 CC module/accel/iaa/accel_iaa.o 00:03:37.642 CC module/accel/error/accel_error.o 00:03:37.642 CC module/sock/posix/posix.o 00:03:37.642 CC module/fsdev/aio/fsdev_aio.o 00:03:37.642 CC module/accel/dsa/accel_dsa.o 00:03:37.642 CC module/blob/bdev/blob_bdev.o 00:03:37.642 CC module/accel/ioat/accel_ioat.o 00:03:37.642 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.642 CC module/keyring/file/keyring.o 00:03:37.642 LIB libspdk_env_dpdk_rpc.a 00:03:37.642 SO libspdk_env_dpdk_rpc.so.6.0 00:03:37.642 SYMLINK libspdk_env_dpdk_rpc.so 00:03:37.642 CC module/accel/error/accel_error_rpc.o 00:03:37.642 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.642 CC module/keyring/file/keyring_rpc.o 00:03:37.642 CC module/accel/iaa/accel_iaa_rpc.o 00:03:37.642 LIB libspdk_scheduler_dynamic.a 00:03:37.642 SO libspdk_scheduler_dynamic.so.4.0 00:03:37.642 LIB libspdk_blob_bdev.a 00:03:37.642 LIB libspdk_accel_ioat.a 00:03:37.642 SO libspdk_blob_bdev.so.11.0 00:03:37.642 SYMLINK libspdk_scheduler_dynamic.so 00:03:37.900 LIB libspdk_keyring_file.a 00:03:37.900 SO libspdk_accel_ioat.so.6.0 00:03:37.900 LIB libspdk_accel_iaa.a 00:03:37.900 LIB libspdk_accel_error.a 00:03:37.900 CC module/accel/dsa/accel_dsa_rpc.o 00:03:37.900 SO libspdk_keyring_file.so.2.0 00:03:37.900 SYMLINK libspdk_blob_bdev.so 00:03:37.900 SO libspdk_accel_error.so.2.0 00:03:37.900 SO libspdk_accel_iaa.so.3.0 00:03:37.900 SYMLINK libspdk_accel_ioat.so 00:03:37.900 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:37.900 CC module/fsdev/aio/linux_aio_mgr.o 00:03:37.900 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:37.900 SYMLINK libspdk_accel_error.so 00:03:37.900 SYMLINK libspdk_keyring_file.so 00:03:37.900 CC module/keyring/linux/keyring.o 00:03:37.900 SYMLINK libspdk_accel_iaa.so 00:03:37.900 LIB libspdk_accel_dsa.a 
00:03:37.900 LIB libspdk_scheduler_dpdk_governor.a 00:03:37.900 CC module/keyring/linux/keyring_rpc.o 00:03:37.900 SO libspdk_accel_dsa.so.5.0 00:03:37.900 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:38.159 SYMLINK libspdk_accel_dsa.so 00:03:38.159 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.159 LIB libspdk_keyring_linux.a 00:03:38.159 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:38.159 SO libspdk_keyring_linux.so.1.0 00:03:38.159 CC module/bdev/delay/vbdev_delay.o 00:03:38.159 CC module/bdev/gpt/gpt.o 00:03:38.159 CC module/bdev/error/vbdev_error.o 00:03:38.159 CC module/blobfs/bdev/blobfs_bdev.o 00:03:38.159 LIB libspdk_sock_posix.a 00:03:38.159 SYMLINK libspdk_keyring_linux.so 00:03:38.159 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:38.159 SO libspdk_sock_posix.so.6.0 00:03:38.159 CC module/bdev/lvol/vbdev_lvol.o 00:03:38.159 LIB libspdk_fsdev_aio.a 00:03:38.159 LIB libspdk_scheduler_gscheduler.a 00:03:38.159 CC module/bdev/malloc/bdev_malloc.o 00:03:38.159 SO libspdk_fsdev_aio.so.1.0 00:03:38.159 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.159 SYMLINK libspdk_sock_posix.so 00:03:38.159 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:38.159 SYMLINK libspdk_fsdev_aio.so 00:03:38.159 SYMLINK libspdk_scheduler_gscheduler.so 00:03:38.159 CC module/bdev/error/vbdev_error_rpc.o 00:03:38.159 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:38.159 CC module/bdev/gpt/vbdev_gpt.o 00:03:38.159 LIB libspdk_blobfs_bdev.a 00:03:38.417 SO libspdk_blobfs_bdev.so.6.0 00:03:38.417 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:38.417 SYMLINK libspdk_blobfs_bdev.so 00:03:38.417 CC module/bdev/null/bdev_null.o 00:03:38.417 LIB libspdk_bdev_error.a 00:03:38.417 SO libspdk_bdev_error.so.6.0 00:03:38.417 LIB libspdk_bdev_delay.a 00:03:38.417 LIB libspdk_bdev_gpt.a 00:03:38.417 SO libspdk_bdev_delay.so.6.0 00:03:38.417 CC module/bdev/nvme/bdev_nvme.o 00:03:38.417 SYMLINK libspdk_bdev_error.so 00:03:38.417 SO libspdk_bdev_gpt.so.6.0 00:03:38.417 CC 
module/bdev/passthru/vbdev_passthru.o 00:03:38.417 CC module/bdev/raid/bdev_raid.o 00:03:38.417 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.417 CC module/bdev/null/bdev_null_rpc.o 00:03:38.417 SYMLINK libspdk_bdev_delay.so 00:03:38.417 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.417 SYMLINK libspdk_bdev_gpt.so 00:03:38.675 LIB libspdk_bdev_malloc.a 00:03:38.675 SO libspdk_bdev_malloc.so.6.0 00:03:38.675 LIB libspdk_bdev_lvol.a 00:03:38.675 SYMLINK libspdk_bdev_malloc.so 00:03:38.675 SO libspdk_bdev_lvol.so.6.0 00:03:38.675 CC module/bdev/split/vbdev_split.o 00:03:38.675 LIB libspdk_bdev_null.a 00:03:38.675 SYMLINK libspdk_bdev_lvol.so 00:03:38.675 CC module/bdev/split/vbdev_split_rpc.o 00:03:38.675 SO libspdk_bdev_null.so.6.0 00:03:38.675 LIB libspdk_bdev_passthru.a 00:03:38.675 SO libspdk_bdev_passthru.so.6.0 00:03:38.675 SYMLINK libspdk_bdev_null.so 00:03:38.675 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:38.934 CC module/bdev/xnvme/bdev_xnvme.o 00:03:38.934 SYMLINK libspdk_bdev_passthru.so 00:03:38.934 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:38.934 CC module/bdev/aio/bdev_aio.o 00:03:38.934 LIB libspdk_bdev_split.a 00:03:38.934 SO libspdk_bdev_split.so.6.0 00:03:38.934 CC module/bdev/ftl/bdev_ftl.o 00:03:38.934 SYMLINK libspdk_bdev_split.so 00:03:38.934 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.934 CC module/bdev/iscsi/bdev_iscsi.o 00:03:38.934 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.934 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:38.934 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:38.934 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.193 LIB libspdk_bdev_zone_block.a 00:03:39.193 SO libspdk_bdev_zone_block.so.6.0 00:03:39.193 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.193 LIB libspdk_bdev_aio.a 00:03:39.193 LIB libspdk_bdev_xnvme.a 00:03:39.193 SO libspdk_bdev_aio.so.6.0 00:03:39.193 SO libspdk_bdev_xnvme.so.3.0 00:03:39.193 SYMLINK libspdk_bdev_zone_block.so 00:03:39.193 CC module/bdev/ftl/bdev_ftl_rpc.o 
00:03:39.193 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.193 SYMLINK libspdk_bdev_xnvme.so 00:03:39.193 LIB libspdk_bdev_iscsi.a 00:03:39.193 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.193 SYMLINK libspdk_bdev_aio.so 00:03:39.193 CC module/bdev/raid/raid0.o 00:03:39.193 SO libspdk_bdev_iscsi.so.6.0 00:03:39.193 CC module/bdev/raid/raid1.o 00:03:39.193 SYMLINK libspdk_bdev_iscsi.so 00:03:39.193 CC module/bdev/raid/concat.o 00:03:39.451 CC module/bdev/nvme/nvme_rpc.o 00:03:39.451 LIB libspdk_bdev_ftl.a 00:03:39.451 SO libspdk_bdev_ftl.so.6.0 00:03:39.451 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.451 SYMLINK libspdk_bdev_ftl.so 00:03:39.451 CC module/bdev/nvme/vbdev_opal.o 00:03:39.451 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.451 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.451 LIB libspdk_bdev_virtio.a 00:03:39.451 LIB libspdk_bdev_raid.a 00:03:39.451 SO libspdk_bdev_virtio.so.6.0 00:03:39.711 SO libspdk_bdev_raid.so.6.0 00:03:39.711 SYMLINK libspdk_bdev_virtio.so 00:03:39.711 SYMLINK libspdk_bdev_raid.so 00:03:40.649 LIB libspdk_bdev_nvme.a 00:03:40.649 SO libspdk_bdev_nvme.so.7.0 00:03:40.649 SYMLINK libspdk_bdev_nvme.so 00:03:41.216 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.216 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.216 CC module/event/subsystems/vmd/vmd.o 00:03:41.216 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.216 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.216 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.216 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.216 CC module/event/subsystems/keyring/keyring.o 00:03:41.216 CC module/event/subsystems/sock/sock.o 00:03:41.216 LIB libspdk_event_vhost_blk.a 00:03:41.216 LIB libspdk_event_sock.a 00:03:41.216 LIB libspdk_event_keyring.a 00:03:41.216 LIB libspdk_event_vmd.a 00:03:41.216 LIB libspdk_event_scheduler.a 00:03:41.216 SO libspdk_event_vhost_blk.so.3.0 00:03:41.216 LIB libspdk_event_fsdev.a 00:03:41.216 SO libspdk_event_sock.so.5.0 
00:03:41.216 SO libspdk_event_scheduler.so.4.0 00:03:41.216 SO libspdk_event_keyring.so.1.0 00:03:41.216 LIB libspdk_event_iobuf.a 00:03:41.216 SO libspdk_event_vmd.so.6.0 00:03:41.216 SO libspdk_event_fsdev.so.1.0 00:03:41.216 SYMLINK libspdk_event_vhost_blk.so 00:03:41.216 SO libspdk_event_iobuf.so.3.0 00:03:41.216 SYMLINK libspdk_event_scheduler.so 00:03:41.216 SYMLINK libspdk_event_sock.so 00:03:41.216 SYMLINK libspdk_event_keyring.so 00:03:41.216 SYMLINK libspdk_event_vmd.so 00:03:41.216 SYMLINK libspdk_event_fsdev.so 00:03:41.216 SYMLINK libspdk_event_iobuf.so 00:03:41.474 CC module/event/subsystems/accel/accel.o 00:03:41.763 LIB libspdk_event_accel.a 00:03:41.763 SO libspdk_event_accel.so.6.0 00:03:41.763 SYMLINK libspdk_event_accel.so 00:03:42.026 CC module/event/subsystems/bdev/bdev.o 00:03:42.026 LIB libspdk_event_bdev.a 00:03:42.026 SO libspdk_event_bdev.so.6.0 00:03:42.026 SYMLINK libspdk_event_bdev.so 00:03:42.285 CC module/event/subsystems/scsi/scsi.o 00:03:42.285 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.285 CC module/event/subsystems/ublk/ublk.o 00:03:42.285 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.285 CC module/event/subsystems/nbd/nbd.o 00:03:42.285 LIB libspdk_event_scsi.a 00:03:42.285 LIB libspdk_event_nbd.a 00:03:42.285 LIB libspdk_event_ublk.a 00:03:42.543 SO libspdk_event_scsi.so.6.0 00:03:42.543 SO libspdk_event_nbd.so.6.0 00:03:42.543 SO libspdk_event_ublk.so.3.0 00:03:42.543 SYMLINK libspdk_event_scsi.so 00:03:42.543 SYMLINK libspdk_event_ublk.so 00:03:42.543 SYMLINK libspdk_event_nbd.so 00:03:42.543 LIB libspdk_event_nvmf.a 00:03:42.543 SO libspdk_event_nvmf.so.6.0 00:03:42.543 SYMLINK libspdk_event_nvmf.so 00:03:42.543 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.543 CC module/event/subsystems/iscsi/iscsi.o 00:03:42.802 LIB libspdk_event_vhost_scsi.a 00:03:42.802 SO libspdk_event_vhost_scsi.so.3.0 00:03:42.802 LIB libspdk_event_iscsi.a 00:03:42.802 SO libspdk_event_iscsi.so.6.0 00:03:42.802 SYMLINK 
libspdk_event_vhost_scsi.so 00:03:42.802 SYMLINK libspdk_event_iscsi.so 00:03:43.059 SO libspdk.so.6.0 00:03:43.059 SYMLINK libspdk.so 00:03:43.059 CXX app/trace/trace.o 00:03:43.059 CC app/spdk_nvme_identify/identify.o 00:03:43.059 CC app/spdk_lspci/spdk_lspci.o 00:03:43.059 CC app/trace_record/trace_record.o 00:03:43.059 CC app/spdk_nvme_perf/perf.o 00:03:43.317 CC app/nvmf_tgt/nvmf_main.o 00:03:43.317 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.317 CC app/spdk_tgt/spdk_tgt.o 00:03:43.317 CC examples/util/zipf/zipf.o 00:03:43.317 CC test/thread/poller_perf/poller_perf.o 00:03:43.317 LINK spdk_lspci 00:03:43.317 LINK nvmf_tgt 00:03:43.317 LINK zipf 00:03:43.317 LINK iscsi_tgt 00:03:43.317 LINK poller_perf 00:03:43.317 LINK spdk_trace_record 00:03:43.575 CC app/spdk_nvme_discover/discovery_aer.o 00:03:43.575 LINK spdk_tgt 00:03:43.575 LINK spdk_trace 00:03:43.575 CC app/spdk_top/spdk_top.o 00:03:43.575 CC examples/ioat/perf/perf.o 00:03:43.575 CC app/spdk_dd/spdk_dd.o 00:03:43.575 CC test/dma/test_dma/test_dma.o 00:03:43.575 LINK spdk_nvme_discover 00:03:43.575 CC app/fio/nvme/fio_plugin.o 00:03:43.833 CC app/fio/bdev/fio_plugin.o 00:03:43.833 LINK ioat_perf 00:03:43.834 CC app/vhost/vhost.o 00:03:43.834 LINK spdk_nvme_perf 00:03:43.834 LINK vhost 00:03:43.834 CC examples/ioat/verify/verify.o 00:03:44.091 LINK spdk_nvme_identify 00:03:44.091 CC test/app/bdev_svc/bdev_svc.o 00:03:44.091 LINK spdk_dd 00:03:44.091 LINK test_dma 00:03:44.091 LINK bdev_svc 00:03:44.091 CC test/app/histogram_perf/histogram_perf.o 00:03:44.091 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.091 CC test/app/jsoncat/jsoncat.o 00:03:44.091 LINK verify 00:03:44.350 LINK spdk_bdev 00:03:44.350 LINK spdk_nvme 00:03:44.350 CC test/app/stub/stub.o 00:03:44.350 LINK histogram_perf 00:03:44.350 LINK spdk_top 00:03:44.350 LINK jsoncat 00:03:44.350 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.350 TEST_HEADER include/spdk/accel.h 00:03:44.350 TEST_HEADER include/spdk/accel_module.h 00:03:44.350 LINK stub 
00:03:44.350 TEST_HEADER include/spdk/assert.h 00:03:44.350 TEST_HEADER include/spdk/barrier.h 00:03:44.350 TEST_HEADER include/spdk/base64.h 00:03:44.350 TEST_HEADER include/spdk/bdev.h 00:03:44.350 TEST_HEADER include/spdk/bdev_module.h 00:03:44.350 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.350 TEST_HEADER include/spdk/bit_array.h 00:03:44.350 TEST_HEADER include/spdk/bit_pool.h 00:03:44.350 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.350 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.350 TEST_HEADER include/spdk/blobfs.h 00:03:44.350 TEST_HEADER include/spdk/blob.h 00:03:44.350 TEST_HEADER include/spdk/conf.h 00:03:44.350 CC examples/idxd/perf/perf.o 00:03:44.350 TEST_HEADER include/spdk/config.h 00:03:44.350 TEST_HEADER include/spdk/cpuset.h 00:03:44.350 TEST_HEADER include/spdk/crc16.h 00:03:44.350 TEST_HEADER include/spdk/crc32.h 00:03:44.350 TEST_HEADER include/spdk/crc64.h 00:03:44.350 TEST_HEADER include/spdk/dif.h 00:03:44.350 TEST_HEADER include/spdk/dma.h 00:03:44.350 TEST_HEADER include/spdk/endian.h 00:03:44.350 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.350 TEST_HEADER include/spdk/env.h 00:03:44.350 TEST_HEADER include/spdk/event.h 00:03:44.350 TEST_HEADER include/spdk/fd_group.h 00:03:44.350 TEST_HEADER include/spdk/fd.h 00:03:44.350 TEST_HEADER include/spdk/file.h 00:03:44.350 TEST_HEADER include/spdk/fsdev.h 00:03:44.350 TEST_HEADER include/spdk/fsdev_module.h 00:03:44.350 TEST_HEADER include/spdk/ftl.h 00:03:44.350 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:44.350 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.350 TEST_HEADER include/spdk/hexlify.h 00:03:44.350 TEST_HEADER include/spdk/histogram_data.h 00:03:44.350 TEST_HEADER include/spdk/idxd.h 00:03:44.350 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.350 TEST_HEADER include/spdk/init.h 00:03:44.350 TEST_HEADER include/spdk/ioat.h 00:03:44.350 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.350 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.350 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:44.350 TEST_HEADER include/spdk/json.h 00:03:44.350 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.350 TEST_HEADER include/spdk/keyring.h 00:03:44.350 TEST_HEADER include/spdk/keyring_module.h 00:03:44.350 LINK lsvmd 00:03:44.350 TEST_HEADER include/spdk/likely.h 00:03:44.350 TEST_HEADER include/spdk/log.h 00:03:44.350 TEST_HEADER include/spdk/lvol.h 00:03:44.350 TEST_HEADER include/spdk/md5.h 00:03:44.350 TEST_HEADER include/spdk/memory.h 00:03:44.350 TEST_HEADER include/spdk/mmio.h 00:03:44.609 TEST_HEADER include/spdk/nbd.h 00:03:44.609 TEST_HEADER include/spdk/net.h 00:03:44.609 TEST_HEADER include/spdk/notify.h 00:03:44.609 TEST_HEADER include/spdk/nvme.h 00:03:44.609 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.609 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.609 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.609 CC examples/vmd/led/led.o 00:03:44.609 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.609 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.609 LINK nvme_fuzz 00:03:44.609 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.609 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.609 TEST_HEADER include/spdk/nvmf.h 00:03:44.609 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.609 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.609 TEST_HEADER include/spdk/opal.h 00:03:44.609 TEST_HEADER include/spdk/opal_spec.h 00:03:44.609 TEST_HEADER include/spdk/pci_ids.h 00:03:44.609 TEST_HEADER include/spdk/pipe.h 00:03:44.609 TEST_HEADER include/spdk/queue.h 00:03:44.609 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.609 TEST_HEADER include/spdk/reduce.h 00:03:44.609 TEST_HEADER include/spdk/rpc.h 00:03:44.609 TEST_HEADER include/spdk/scheduler.h 00:03:44.609 TEST_HEADER include/spdk/scsi.h 00:03:44.609 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.609 TEST_HEADER include/spdk/sock.h 00:03:44.609 TEST_HEADER include/spdk/stdinc.h 00:03:44.609 TEST_HEADER include/spdk/string.h 00:03:44.609 TEST_HEADER include/spdk/thread.h 00:03:44.609 
TEST_HEADER include/spdk/trace.h 00:03:44.609 CC examples/thread/thread/thread_ex.o 00:03:44.609 TEST_HEADER include/spdk/trace_parser.h 00:03:44.609 TEST_HEADER include/spdk/tree.h 00:03:44.609 TEST_HEADER include/spdk/ublk.h 00:03:44.609 TEST_HEADER include/spdk/util.h 00:03:44.609 TEST_HEADER include/spdk/uuid.h 00:03:44.609 TEST_HEADER include/spdk/version.h 00:03:44.609 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.609 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.609 TEST_HEADER include/spdk/vhost.h 00:03:44.609 TEST_HEADER include/spdk/vmd.h 00:03:44.609 TEST_HEADER include/spdk/xor.h 00:03:44.609 TEST_HEADER include/spdk/zipf.h 00:03:44.609 CC examples/sock/hello_world/hello_sock.o 00:03:44.609 CXX test/cpp_headers/accel.o 00:03:44.609 CC test/env/vtophys/vtophys.o 00:03:44.609 LINK interrupt_tgt 00:03:44.609 LINK led 00:03:44.609 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.609 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.609 LINK idxd_perf 00:03:44.609 CXX test/cpp_headers/accel_module.o 00:03:44.609 LINK thread 00:03:44.867 LINK vtophys 00:03:44.867 LINK hello_sock 00:03:44.867 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.867 LINK env_dpdk_post_init 00:03:44.867 CXX test/cpp_headers/assert.o 00:03:44.867 CC test/event/reactor/reactor.o 00:03:44.867 LINK mem_callbacks 00:03:44.867 CC test/event/event_perf/event_perf.o 00:03:44.867 CC test/event/reactor_perf/reactor_perf.o 00:03:44.867 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.867 CC test/event/app_repeat/app_repeat.o 00:03:44.867 CXX test/cpp_headers/barrier.o 00:03:45.124 LINK reactor 00:03:45.124 LINK reactor_perf 00:03:45.124 CC examples/nvme/reconnect/reconnect.o 00:03:45.124 CC examples/nvme/hello_world/hello_world.o 00:03:45.124 LINK event_perf 00:03:45.124 CC test/env/memory/memory_ut.o 00:03:45.124 LINK app_repeat 00:03:45.124 CXX test/cpp_headers/base64.o 00:03:45.124 CXX test/cpp_headers/bdev.o 00:03:45.124 LINK hello_world 00:03:45.124 CC 
test/event/scheduler/scheduler.o 00:03:45.124 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.382 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:45.382 CXX test/cpp_headers/bdev_module.o 00:03:45.382 LINK reconnect 00:03:45.382 LINK vhost_fuzz 00:03:45.382 CC test/rpc_client/rpc_client_test.o 00:03:45.382 CC test/nvme/aer/aer.o 00:03:45.382 CXX test/cpp_headers/bdev_zone.o 00:03:45.382 CXX test/cpp_headers/bit_array.o 00:03:45.382 LINK scheduler 00:03:45.382 LINK rpc_client_test 00:03:45.639 LINK hello_fsdev 00:03:45.639 CXX test/cpp_headers/bit_pool.o 00:03:45.639 CC test/accel/dif/dif.o 00:03:45.639 LINK aer 00:03:45.639 LINK nvme_manage 00:03:45.639 CXX test/cpp_headers/blob_bdev.o 00:03:45.639 CC test/blobfs/mkfs/mkfs.o 00:03:45.639 CC examples/accel/perf/accel_perf.o 00:03:45.639 CC test/lvol/esnap/esnap.o 00:03:45.639 CC examples/nvme/arbitration/arbitration.o 00:03:45.896 CC test/nvme/reset/reset.o 00:03:45.896 CXX test/cpp_headers/blobfs_bdev.o 00:03:45.896 LINK mkfs 00:03:45.896 LINK memory_ut 00:03:45.896 CC examples/blob/hello_world/hello_blob.o 00:03:45.896 CXX test/cpp_headers/blobfs.o 00:03:45.896 LINK reset 00:03:45.896 CXX test/cpp_headers/blob.o 00:03:46.154 LINK arbitration 00:03:46.154 LINK dif 00:03:46.154 CC test/env/pci/pci_ut.o 00:03:46.154 CC test/nvme/e2edp/nvme_dp.o 00:03:46.154 LINK hello_blob 00:03:46.154 CXX test/cpp_headers/conf.o 00:03:46.154 CC test/nvme/sgl/sgl.o 00:03:46.154 LINK accel_perf 00:03:46.154 CXX test/cpp_headers/config.o 00:03:46.154 CC examples/nvme/hotplug/hotplug.o 00:03:46.154 CXX test/cpp_headers/cpuset.o 00:03:46.411 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:46.411 LINK sgl 00:03:46.411 LINK nvme_dp 00:03:46.411 CC examples/blob/cli/blobcli.o 00:03:46.411 CXX test/cpp_headers/crc16.o 00:03:46.411 LINK iscsi_fuzz 00:03:46.411 LINK pci_ut 00:03:46.411 CC examples/nvme/abort/abort.o 00:03:46.411 LINK cmb_copy 00:03:46.411 LINK hotplug 00:03:46.411 CC test/nvme/overhead/overhead.o 00:03:46.411 CXX 
test/cpp_headers/crc32.o 00:03:46.669 CC examples/bdev/hello_world/hello_bdev.o 00:03:46.669 CC test/nvme/err_injection/err_injection.o 00:03:46.669 CC test/nvme/startup/startup.o 00:03:46.669 CXX test/cpp_headers/crc64.o 00:03:46.669 CC test/nvme/reserve/reserve.o 00:03:46.669 CC test/nvme/simple_copy/simple_copy.o 00:03:46.669 LINK err_injection 00:03:46.669 CXX test/cpp_headers/dif.o 00:03:46.669 LINK hello_bdev 00:03:46.669 LINK startup 00:03:46.669 LINK abort 00:03:46.669 LINK overhead 00:03:46.927 LINK blobcli 00:03:46.927 LINK reserve 00:03:46.927 CXX test/cpp_headers/dma.o 00:03:46.927 LINK simple_copy 00:03:46.927 CXX test/cpp_headers/endian.o 00:03:46.927 CXX test/cpp_headers/env_dpdk.o 00:03:46.927 CC test/nvme/connect_stress/connect_stress.o 00:03:46.927 CXX test/cpp_headers/env.o 00:03:46.927 CXX test/cpp_headers/event.o 00:03:46.927 CC examples/bdev/bdevperf/bdevperf.o 00:03:46.927 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:47.185 CXX test/cpp_headers/fd_group.o 00:03:47.185 CXX test/cpp_headers/fd.o 00:03:47.185 CC test/nvme/boot_partition/boot_partition.o 00:03:47.185 CXX test/cpp_headers/file.o 00:03:47.185 LINK connect_stress 00:03:47.185 CXX test/cpp_headers/fsdev.o 00:03:47.185 LINK pmr_persistence 00:03:47.185 CXX test/cpp_headers/fsdev_module.o 00:03:47.185 LINK boot_partition 00:03:47.185 CXX test/cpp_headers/ftl.o 00:03:47.185 CXX test/cpp_headers/fuse_dispatcher.o 00:03:47.185 CC test/bdev/bdevio/bdevio.o 00:03:47.185 CXX test/cpp_headers/gpt_spec.o 00:03:47.185 CXX test/cpp_headers/hexlify.o 00:03:47.443 CC test/nvme/compliance/nvme_compliance.o 00:03:47.443 CXX test/cpp_headers/histogram_data.o 00:03:47.443 CXX test/cpp_headers/idxd.o 00:03:47.443 CXX test/cpp_headers/idxd_spec.o 00:03:47.443 CC test/nvme/fused_ordering/fused_ordering.o 00:03:47.443 CXX test/cpp_headers/init.o 00:03:47.443 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:47.443 CXX test/cpp_headers/ioat.o 00:03:47.443 CXX test/cpp_headers/ioat_spec.o 
00:03:47.443 CXX test/cpp_headers/iscsi_spec.o 00:03:47.700 LINK fused_ordering 00:03:47.700 CXX test/cpp_headers/json.o 00:03:47.700 LINK bdevperf 00:03:47.700 LINK doorbell_aers 00:03:47.700 LINK bdevio 00:03:47.700 CXX test/cpp_headers/jsonrpc.o 00:03:47.700 LINK nvme_compliance 00:03:47.700 CXX test/cpp_headers/keyring.o 00:03:47.700 CXX test/cpp_headers/keyring_module.o 00:03:47.700 CC test/nvme/fdp/fdp.o 00:03:47.700 CXX test/cpp_headers/likely.o 00:03:47.700 CXX test/cpp_headers/log.o 00:03:47.700 CC test/nvme/cuse/cuse.o 00:03:47.700 CXX test/cpp_headers/lvol.o 00:03:47.700 CXX test/cpp_headers/md5.o 00:03:47.700 CXX test/cpp_headers/memory.o 00:03:47.958 CXX test/cpp_headers/mmio.o 00:03:47.958 CXX test/cpp_headers/nbd.o 00:03:47.958 CXX test/cpp_headers/net.o 00:03:47.958 CXX test/cpp_headers/notify.o 00:03:47.958 CXX test/cpp_headers/nvme.o 00:03:47.958 CC examples/nvmf/nvmf/nvmf.o 00:03:47.958 CXX test/cpp_headers/nvme_intel.o 00:03:47.958 CXX test/cpp_headers/nvme_ocssd.o 00:03:47.958 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:47.958 CXX test/cpp_headers/nvme_spec.o 00:03:47.958 CXX test/cpp_headers/nvme_zns.o 00:03:47.958 LINK fdp 00:03:47.958 CXX test/cpp_headers/nvmf_cmd.o 00:03:48.217 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:48.217 CXX test/cpp_headers/nvmf.o 00:03:48.217 CXX test/cpp_headers/nvmf_spec.o 00:03:48.217 CXX test/cpp_headers/nvmf_transport.o 00:03:48.217 CXX test/cpp_headers/opal.o 00:03:48.217 CXX test/cpp_headers/opal_spec.o 00:03:48.217 CXX test/cpp_headers/pci_ids.o 00:03:48.217 CXX test/cpp_headers/pipe.o 00:03:48.217 LINK nvmf 00:03:48.217 CXX test/cpp_headers/queue.o 00:03:48.217 CXX test/cpp_headers/reduce.o 00:03:48.217 CXX test/cpp_headers/rpc.o 00:03:48.217 CXX test/cpp_headers/scheduler.o 00:03:48.217 CXX test/cpp_headers/scsi.o 00:03:48.217 CXX test/cpp_headers/scsi_spec.o 00:03:48.475 CXX test/cpp_headers/sock.o 00:03:48.475 CXX test/cpp_headers/stdinc.o 00:03:48.475 CXX test/cpp_headers/string.o 00:03:48.475 CXX 
test/cpp_headers/thread.o 00:03:48.475 CXX test/cpp_headers/trace.o 00:03:48.475 CXX test/cpp_headers/trace_parser.o 00:03:48.475 CXX test/cpp_headers/tree.o 00:03:48.475 CXX test/cpp_headers/ublk.o 00:03:48.475 CXX test/cpp_headers/util.o 00:03:48.475 CXX test/cpp_headers/uuid.o 00:03:48.475 CXX test/cpp_headers/version.o 00:03:48.475 CXX test/cpp_headers/vfio_user_pci.o 00:03:48.475 CXX test/cpp_headers/vfio_user_spec.o 00:03:48.475 CXX test/cpp_headers/vhost.o 00:03:48.475 CXX test/cpp_headers/vmd.o 00:03:48.475 CXX test/cpp_headers/xor.o 00:03:48.733 CXX test/cpp_headers/zipf.o 00:03:48.733 LINK cuse 00:03:50.703 LINK esnap 00:03:50.703 00:03:50.703 real 1m3.646s 00:03:50.703 user 6m2.672s 00:03:50.703 sys 1m2.433s 00:03:50.703 19:47:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:50.703 ************************************ 00:03:50.703 END TEST make 00:03:50.703 ************************************ 00:03:50.703 19:47:35 make -- common/autotest_common.sh@10 -- $ set +x 00:03:50.703 19:47:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:50.703 19:47:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:50.703 19:47:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:50.703 19:47:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.703 19:47:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:50.703 19:47:35 -- pm/common@44 -- $ pid=5058 00:03:50.703 19:47:35 -- pm/common@50 -- $ kill -TERM 5058 00:03:50.703 19:47:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.703 19:47:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:50.703 19:47:35 -- pm/common@44 -- $ pid=5059 00:03:50.703 19:47:35 -- pm/common@50 -- $ kill -TERM 5059 00:03:50.962 19:47:35 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:50.962 19:47:35 -- common/autotest_common.sh@1681 -- # awk 
'{print $NF}' 00:03:50.962 19:47:35 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:50.962 19:47:35 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:50.962 19:47:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.962 19:47:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.962 19:47:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.962 19:47:35 -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.962 19:47:35 -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.962 19:47:35 -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.962 19:47:35 -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.962 19:47:35 -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.962 19:47:35 -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.962 19:47:35 -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.962 19:47:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.962 19:47:35 -- scripts/common.sh@344 -- # case "$op" in 00:03:50.962 19:47:35 -- scripts/common.sh@345 -- # : 1 00:03:50.962 19:47:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.962 19:47:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.962 19:47:35 -- scripts/common.sh@365 -- # decimal 1 00:03:50.962 19:47:35 -- scripts/common.sh@353 -- # local d=1 00:03:50.962 19:47:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.962 19:47:35 -- scripts/common.sh@355 -- # echo 1 00:03:50.962 19:47:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.962 19:47:35 -- scripts/common.sh@366 -- # decimal 2 00:03:50.962 19:47:35 -- scripts/common.sh@353 -- # local d=2 00:03:50.962 19:47:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.962 19:47:35 -- scripts/common.sh@355 -- # echo 2 00:03:50.962 19:47:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.962 19:47:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.962 19:47:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.962 19:47:35 -- scripts/common.sh@368 -- # return 0 00:03:50.962 19:47:35 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.962 19:47:35 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:50.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.962 --rc genhtml_branch_coverage=1 00:03:50.962 --rc genhtml_function_coverage=1 00:03:50.962 --rc genhtml_legend=1 00:03:50.962 --rc geninfo_all_blocks=1 00:03:50.962 --rc geninfo_unexecuted_blocks=1 00:03:50.962 00:03:50.962 ' 00:03:50.962 19:47:35 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:50.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.962 --rc genhtml_branch_coverage=1 00:03:50.962 --rc genhtml_function_coverage=1 00:03:50.962 --rc genhtml_legend=1 00:03:50.962 --rc geninfo_all_blocks=1 00:03:50.962 --rc geninfo_unexecuted_blocks=1 00:03:50.962 00:03:50.962 ' 00:03:50.962 19:47:35 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:50.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.962 --rc genhtml_branch_coverage=1 00:03:50.962 --rc 
genhtml_function_coverage=1 00:03:50.962 --rc genhtml_legend=1 00:03:50.962 --rc geninfo_all_blocks=1 00:03:50.962 --rc geninfo_unexecuted_blocks=1 00:03:50.962 00:03:50.962 ' 00:03:50.962 19:47:35 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:50.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.962 --rc genhtml_branch_coverage=1 00:03:50.962 --rc genhtml_function_coverage=1 00:03:50.962 --rc genhtml_legend=1 00:03:50.962 --rc geninfo_all_blocks=1 00:03:50.962 --rc geninfo_unexecuted_blocks=1 00:03:50.962 00:03:50.962 ' 00:03:50.962 19:47:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:50.962 19:47:35 -- nvmf/common.sh@7 -- # uname -s 00:03:50.962 19:47:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:50.962 19:47:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:50.962 19:47:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:50.962 19:47:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:50.962 19:47:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:50.962 19:47:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:50.962 19:47:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:50.962 19:47:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:50.962 19:47:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:50.962 19:47:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:50.962 19:47:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bef6a72b-d837-4d7e-b594-b92515d61423 00:03:50.963 19:47:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=bef6a72b-d837-4d7e-b594-b92515d61423 00:03:50.963 19:47:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:50.963 19:47:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:50.963 19:47:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:50.963 19:47:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:50.963 19:47:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:50.963 19:47:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:50.963 19:47:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:50.963 19:47:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:50.963 19:47:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:50.963 19:47:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.963 19:47:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.963 19:47:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.963 19:47:35 -- paths/export.sh@5 -- # export PATH 00:03:50.963 19:47:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:50.963 19:47:35 -- nvmf/common.sh@51 -- # : 0 00:03:50.963 19:47:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:50.963 19:47:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:50.963 19:47:35 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:50.963 19:47:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:50.963 19:47:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:50.963 19:47:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:50.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:50.963 19:47:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:50.963 19:47:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:50.963 19:47:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:50.963 19:47:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.963 19:47:35 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.963 19:47:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:50.963 19:47:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:50.963 19:47:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:50.963 19:47:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:50.963 19:47:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:50.963 19:47:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:50.963 19:47:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:50.963 19:47:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:50.963 19:47:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54581 00:03:50.963 19:47:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:50.963 19:47:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:50.963 19:47:35 -- pm/common@17 -- # local monitor 00:03:50.963 19:47:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.963 19:47:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.963 19:47:35 -- pm/common@25 -- # sleep 1 00:03:50.963 19:47:35 -- pm/common@21 -- # date +%s 00:03:50.963 19:47:35 -- 
pm/common@21 -- # date +%s 00:03:50.963 19:47:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727725655 00:03:50.963 19:47:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727725655 00:03:50.963 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727725655_collect-cpu-load.pm.log 00:03:50.963 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727725655_collect-vmstat.pm.log 00:03:51.900 19:47:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:51.900 19:47:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:51.900 19:47:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:51.900 19:47:36 -- common/autotest_common.sh@10 -- # set +x 00:03:51.900 19:47:36 -- spdk/autotest.sh@59 -- # create_test_list 00:03:51.900 19:47:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:51.900 19:47:36 -- common/autotest_common.sh@10 -- # set +x 00:03:52.161 19:47:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:52.161 19:47:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:52.161 19:47:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:52.161 19:47:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:52.161 19:47:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:52.161 19:47:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.161 19:47:36 -- common/autotest_common.sh@1455 -- # uname 00:03:52.161 19:47:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:52.161 19:47:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.161 19:47:36 -- common/autotest_common.sh@1475 -- 
# uname 00:03:52.161 19:47:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:52.161 19:47:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:52.161 19:47:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:52.161 lcov: LCOV version 1.15 00:03:52.161 19:47:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:07.046 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:07.046 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.324 19:48:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:19.324 19:48:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.324 19:48:03 -- common/autotest_common.sh@10 -- # set +x 00:04:19.324 19:48:03 -- spdk/autotest.sh@78 -- # rm -f 00:04:19.324 19:48:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.157 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:20.157 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:20.157 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:20.157 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:20.157 19:48:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:20.157 19:48:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:20.157 19:48:04 -- 
common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:20.157 19:48:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:20.157 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.157 19:48:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:20.157 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:20.157 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.157 19:48:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:20.157 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:20.157 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.157 19:48:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:20.157 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:20.157 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.157 19:48:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:20.157 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:20.157 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.157 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.157 19:48:04 -- 
common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:20.157 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:20.158 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:20.158 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.158 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.158 19:48:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:20.158 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:20.158 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:20.158 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.158 19:48:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:20.158 19:48:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:20.158 19:48:04 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:20.158 19:48:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:20.158 19:48:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:20.158 19:48:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:20.158 19:48:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.158 19:48:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.158 19:48:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:20.158 19:48:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:20.158 19:48:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.158 No valid GPT data, bailing 00:04:20.158 19:48:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.158 19:48:04 -- scripts/common.sh@394 -- # pt= 00:04:20.158 19:48:04 -- scripts/common.sh@395 -- # return 1 00:04:20.158 19:48:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero 
of=/dev/nvme0n1 bs=1M count=1 00:04:20.158 1+0 records in 00:04:20.158 1+0 records out 00:04:20.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00944241 s, 111 MB/s 00:04:20.158 19:48:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.158 19:48:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.158 19:48:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:20.158 19:48:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:20.158 19:48:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:20.158 No valid GPT data, bailing 00:04:20.158 19:48:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:20.158 19:48:04 -- scripts/common.sh@394 -- # pt= 00:04:20.158 19:48:04 -- scripts/common.sh@395 -- # return 1 00:04:20.158 19:48:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:20.158 1+0 records in 00:04:20.158 1+0 records out 00:04:20.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423126 s, 248 MB/s 00:04:20.158 19:48:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.158 19:48:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.158 19:48:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:20.158 19:48:04 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:20.158 19:48:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:20.419 No valid GPT data, bailing 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # pt= 00:04:20.419 19:48:04 -- scripts/common.sh@395 -- # return 1 00:04:20.419 19:48:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:20.419 1+0 records in 00:04:20.419 1+0 records out 00:04:20.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460243 s, 228 MB/s 00:04:20.419 19:48:04 -- spdk/autotest.sh@97 -- # for dev 
in /dev/nvme*n!(*p*) 00:04:20.419 19:48:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.419 19:48:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:20.419 19:48:04 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:20.419 19:48:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:20.419 No valid GPT data, bailing 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # pt= 00:04:20.419 19:48:04 -- scripts/common.sh@395 -- # return 1 00:04:20.419 19:48:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:20.419 1+0 records in 00:04:20.419 1+0 records out 00:04:20.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365516 s, 287 MB/s 00:04:20.419 19:48:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.419 19:48:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.419 19:48:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:20.419 19:48:04 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:20.419 19:48:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:20.419 No valid GPT data, bailing 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # pt= 00:04:20.419 19:48:04 -- scripts/common.sh@395 -- # return 1 00:04:20.419 19:48:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:20.419 1+0 records in 00:04:20.419 1+0 records out 00:04:20.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523078 s, 200 MB/s 00:04:20.419 19:48:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.419 19:48:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.419 19:48:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:20.419 19:48:04 -- scripts/common.sh@381 -- # local 
block=/dev/nvme3n1 pt 00:04:20.419 19:48:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:20.419 No valid GPT data, bailing 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:20.419 19:48:04 -- scripts/common.sh@394 -- # pt= 00:04:20.419 19:48:04 -- scripts/common.sh@395 -- # return 1 00:04:20.419 19:48:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:20.419 1+0 records in 00:04:20.419 1+0 records out 00:04:20.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396633 s, 264 MB/s 00:04:20.419 19:48:04 -- spdk/autotest.sh@105 -- # sync 00:04:20.681 19:48:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.681 19:48:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.681 19:48:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:22.597 19:48:06 -- spdk/autotest.sh@111 -- # uname -s 00:04:22.597 19:48:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:22.597 19:48:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:22.597 19:48:06 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.168 Hugepages 00:04:23.168 node hugesize free / total 00:04:23.168 node0 1048576kB 0 / 0 00:04:23.168 node0 2048kB 0 / 0 00:04:23.168 00:04:23.168 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.168 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:23.429 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:23.429 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:23.429 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:23.429 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:23.429 19:48:07 -- spdk/autotest.sh@117 -- # uname -s 
00:04:23.429 19:48:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:23.429 19:48:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:23.429 19:48:07 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.575 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.575 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.575 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.575 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.575 19:48:08 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:25.956 19:48:09 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:25.956 19:48:09 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:25.956 19:48:09 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:25.956 19:48:09 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:25.956 19:48:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:25.956 19:48:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:25.956 19:48:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.956 19:48:09 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.956 19:48:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:25.956 19:48:09 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:25.956 19:48:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:25.956 19:48:09 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.217 Waiting for block devices as requested 00:04:26.217 0000:00:11.0 (1b36 
0010): uio_pci_generic -> nvme 00:04:26.478 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.478 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.478 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.767 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:31.767 19:48:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:31.767 19:48:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:31.767 19:48:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.767 19:48:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:31.767 19:48:15 
-- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1541 -- # continue 00:04:31.767 19:48:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.767 19:48:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 
00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1541 -- # continue 00:04:31.767 19:48:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.767 19:48:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.767 19:48:15 -- 
common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1541 -- # continue 00:04:31.767 19:48:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:31.767 19:48:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.767 19:48:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.767 19:48:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.767 19:48:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.767 19:48:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 
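`get_nvme_ctrlr_from_bdf`, as traced for each BDF above, resolves the `/sys/class/nvme/nvme*` symlinks and keeps the one that sits under the requested PCI address. A standalone sketch that takes the resolved paths as arguments instead of reading sysfs (the paths below mirror the trace; `ctrlr_for_bdf` is a name I introduce for illustration):

```shell
# $1 = PCI BDF; remaining args = resolved sysfs paths
# (stand-ins for `readlink -f /sys/class/nvme/nvme*` output)
ctrlr_for_bdf() {
  local bdf=$1 path
  shift
  # Keep only the path rooted at the requested BDF, as the trace's grep does
  path=$(printf '%s\n' "$@" | grep "$bdf/nvme/nvme") || return 1
  basename "$path"
}
ctrlr_for_bdf 0000:00:10.0 \
  /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 \
  /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
```

This prints `nvme1`, the same controller name the trace derives for 0000:00:10.0; note BDF ordering and `nvmeN` numbering need not agree.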
00:04:31.767 19:48:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.767 19:48:15 -- common/autotest_common.sh@1541 -- # continue 00:04:31.767 19:48:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:31.767 19:48:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.767 19:48:15 -- common/autotest_common.sh@10 -- # set +x 00:04:31.767 19:48:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:31.767 19:48:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.767 19:48:16 -- common/autotest_common.sh@10 -- # set +x 00:04:31.767 19:48:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.910 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.910 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.910 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.910 19:48:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:32.910 19:48:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.910 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:32.910 19:48:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:32.910 19:48:17 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:32.910 19:48:17 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.910 19:48:17 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:32.910 19:48:17 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:32.910 19:48:17 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:32.910 19:48:17 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:32.910 19:48:17 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:32.910 19:48:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:32.910 19:48:17 -- 
common/autotest_common.sh@1496 -- # local bdfs 00:04:32.910 19:48:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.910 19:48:17 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.910 19:48:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:32.910 19:48:17 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:32.910 19:48:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:32.910 19:48:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.910 19:48:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:32.910 19:48:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.910 19:48:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.910 19:48:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.910 19:48:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:32.910 19:48:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.910 19:48:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.910 19:48:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.910 19:48:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:32.910 19:48:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.910 19:48:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:33.171 19:48:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:33.171 19:48:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:33.171 19:48:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:33.171 19:48:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == 
\0\x\0\a\5\4 ]] 00:04:33.171 19:48:17 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:33.171 19:48:17 -- common/autotest_common.sh@1570 -- # return 0 00:04:33.171 19:48:17 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:33.171 19:48:17 -- common/autotest_common.sh@1578 -- # return 0 00:04:33.171 19:48:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:33.171 19:48:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:33.171 19:48:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:33.171 19:48:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:33.171 19:48:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:33.171 19:48:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.171 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.171 19:48:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:33.171 19:48:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:33.171 19:48:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.171 19:48:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.171 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.171 ************************************ 00:04:33.171 START TEST env 00:04:33.171 ************************************ 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:33.171 * Looking for test storage... 
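`get_nvme_bdfs_by_id 0x0a54` above reads each controller's PCI device ID from sysfs and keeps only matches; the QEMU controllers all report 0x0010, so the resulting list is empty and `opal_revert_cleanup` has nothing to do. A sketch of one comparison, with a temp file standing in for `/sys/bus/pci/devices/<bdf>/device` (assumption: no real PCI sysfs here):

```shell
# Temp file stands in for /sys/bus/pci/devices/<bdf>/device
sysfs_device=$(mktemp)
echo 0x0010 > "$sysfs_device"    # QEMU NVMe device ID, as seen in the trace
device=$(cat "$sysfs_device")
# Keep the BDF only when the ID matches the filter argument (0x0a54)
if [[ $device == 0x0a54 ]]; then match=yes; else match=no; fi
echo "$match"
rm -f "$sysfs_device"
```

Prints `no`, i.e. the bdfs array stays empty, which is why the trace ends with `(( 0 > 0 ))`.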
00:04:33.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.171 19:48:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.171 19:48:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.171 19:48:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.171 19:48:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.171 19:48:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.171 19:48:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.171 19:48:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.171 19:48:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.171 19:48:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.171 19:48:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.171 19:48:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.171 19:48:17 env -- scripts/common.sh@344 -- # case "$op" in 00:04:33.171 19:48:17 env -- scripts/common.sh@345 -- # : 1 00:04:33.171 19:48:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.171 19:48:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.171 19:48:17 env -- scripts/common.sh@365 -- # decimal 1 00:04:33.171 19:48:17 env -- scripts/common.sh@353 -- # local d=1 00:04:33.171 19:48:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.171 19:48:17 env -- scripts/common.sh@355 -- # echo 1 00:04:33.171 19:48:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.171 19:48:17 env -- scripts/common.sh@366 -- # decimal 2 00:04:33.171 19:48:17 env -- scripts/common.sh@353 -- # local d=2 00:04:33.171 19:48:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.171 19:48:17 env -- scripts/common.sh@355 -- # echo 2 00:04:33.171 19:48:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.171 19:48:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.171 19:48:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.171 19:48:17 env -- scripts/common.sh@368 -- # return 0 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.171 --rc genhtml_branch_coverage=1 00:04:33.171 --rc genhtml_function_coverage=1 00:04:33.171 --rc genhtml_legend=1 00:04:33.171 --rc geninfo_all_blocks=1 00:04:33.171 --rc geninfo_unexecuted_blocks=1 00:04:33.171 00:04:33.171 ' 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.171 --rc genhtml_branch_coverage=1 00:04:33.171 --rc genhtml_function_coverage=1 00:04:33.171 --rc genhtml_legend=1 00:04:33.171 --rc geninfo_all_blocks=1 00:04:33.171 --rc geninfo_unexecuted_blocks=1 00:04:33.171 00:04:33.171 ' 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:33.171 --rc genhtml_branch_coverage=1 00:04:33.171 --rc genhtml_function_coverage=1 00:04:33.171 --rc genhtml_legend=1 00:04:33.171 --rc geninfo_all_blocks=1 00:04:33.171 --rc geninfo_unexecuted_blocks=1 00:04:33.171 00:04:33.171 ' 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.171 --rc genhtml_branch_coverage=1 00:04:33.171 --rc genhtml_function_coverage=1 00:04:33.171 --rc genhtml_legend=1 00:04:33.171 --rc geninfo_all_blocks=1 00:04:33.171 --rc geninfo_unexecuted_blocks=1 00:04:33.171 00:04:33.171 ' 00:04:33.171 19:48:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.171 19:48:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.171 19:48:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.171 ************************************ 00:04:33.171 START TEST env_memory 00:04:33.171 ************************************ 00:04:33.171 19:48:17 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:33.171 00:04:33.171 00:04:33.171 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.171 http://cunit.sourceforge.net/ 00:04:33.171 00:04:33.171 00:04:33.171 Suite: memory 00:04:33.171 Test: alloc and free memory map ...[2024-09-30 19:48:17.533616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:33.432 passed 00:04:33.433 Test: mem map translation ...[2024-09-30 19:48:17.572432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:33.433 [2024-09-30 19:48:17.572483] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:33.433 [2024-09-30 19:48:17.572543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:33.433 [2024-09-30 19:48:17.572559] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:33.433 passed 00:04:33.433 Test: mem map registration ...[2024-09-30 19:48:17.640773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:33.433 [2024-09-30 19:48:17.640816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:33.433 passed 00:04:33.433 Test: mem map adjacent registrations ...passed 00:04:33.433 00:04:33.433 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.433 suites 1 1 n/a 0 0 00:04:33.433 tests 4 4 4 0 0 00:04:33.433 asserts 152 152 152 0 n/a 00:04:33.433 00:04:33.433 Elapsed time = 0.233 seconds 00:04:33.433 00:04:33.433 real 0m0.270s 00:04:33.433 user 0m0.238s 00:04:33.433 sys 0m0.024s 00:04:33.433 19:48:17 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.433 ************************************ 00:04:33.433 END TEST env_memory 00:04:33.433 19:48:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:33.433 ************************************ 00:04:33.693 19:48:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.693 19:48:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.693 19:48:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.693 19:48:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.693 
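The `scripts/common.sh` trace earlier (`lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-` and `:` and compares component-wise, which is why `1.15` sorts below `2`. A condensed re-sketch of that comparison (the body is simplified from the traced script, not copied verbatim):

```shell
# Return 0 (true) when version $1 sorts strictly below version $2
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    # Missing components compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal
}
lt 1.15 2 && echo older || echo newer-or-equal
```

Prints `older`: the first components 1 < 2 decide the result before 15 is ever compared, so the lexicographic pitfall (`"1.15" > "2"` as strings) is avoided.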
************************************ 00:04:33.693 START TEST env_vtophys 00:04:33.693 ************************************ 00:04:33.693 19:48:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.693 EAL: lib.eal log level changed from notice to debug 00:04:33.693 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 1 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 2 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 3 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 4 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 5 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 6 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 7 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 8 as core 0 on socket 0 00:04:33.693 EAL: Detected lcore 9 as core 0 on socket 0 00:04:33.693 EAL: Maximum logical cores by configuration: 128 00:04:33.693 EAL: Detected CPU lcores: 10 00:04:33.693 EAL: Detected NUMA nodes: 1 00:04:33.693 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:33.693 EAL: Detected shared linkage of DPDK 00:04:33.693 EAL: No shared files mode enabled, IPC will be disabled 00:04:33.693 EAL: Selected IOVA mode 'PA' 00:04:33.693 EAL: Probing VFIO support... 00:04:33.693 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.693 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:33.693 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.693 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.693 EAL: Setting up physically contiguous memory... 
00:04:33.693 EAL: Setting maximum number of open files to 524288 00:04:33.693 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.693 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.693 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.693 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.693 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.693 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.693 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.693 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.693 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.693 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.693 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.693 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.693 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.693 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.693 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.693 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.693 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.693 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.693 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.693 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.693 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.693 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.693 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.693 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.693 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.693 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.693 EAL: Hugepages will be freed exactly as allocated. 
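Each of the four memseg lists above reserves a 0x61000-byte header plus 0x400000000 bytes of virtual address space. As a quick check on those sizes (pure arithmetic, no EAL involved):

```shell
# 0x400000000 bytes of reserved VA per memseg list, 4 lists in the trace
per_list=$(( 0x400000000 ))
echo "$(( per_list >> 30 )) GiB per list"
echo "$(( 4 * per_list >> 30 )) GiB total reserved VA"
```

So the EAL reserves 16 GiB of VA per list, 64 GiB total, even though far less is ever backed by 2 MiB hugepages.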
00:04:33.693 EAL: No shared files mode enabled, IPC is disabled 00:04:33.693 EAL: No shared files mode enabled, IPC is disabled 00:04:33.693 EAL: TSC frequency is ~2600000 KHz 00:04:33.693 EAL: Main lcore 0 is ready (tid=7f4cdd7cea40;cpuset=[0]) 00:04:33.693 EAL: Trying to obtain current memory policy. 00:04:33.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.693 EAL: Restoring previous memory policy: 0 00:04:33.693 EAL: request: mp_malloc_sync 00:04:33.693 EAL: No shared files mode enabled, IPC is disabled 00:04:33.693 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.693 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.693 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.693 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.694 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:33.694 00:04:33.694 00:04:33.694 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.694 http://cunit.sourceforge.net/ 00:04:33.694 00:04:33.694 00:04:33.694 Suite: components_suite 00:04:34.266 Test: vtophys_malloc_test ...passed 00:04:34.266 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:34.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.266 EAL: Restoring previous memory policy: 4 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was expanded by 4MB 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was shrunk by 4MB 00:04:34.266 EAL: Trying to obtain current memory policy. 
00:04:34.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.266 EAL: Restoring previous memory policy: 4 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was expanded by 6MB 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was shrunk by 6MB 00:04:34.266 EAL: Trying to obtain current memory policy. 00:04:34.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.266 EAL: Restoring previous memory policy: 4 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was expanded by 10MB 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was shrunk by 10MB 00:04:34.266 EAL: Trying to obtain current memory policy. 00:04:34.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.266 EAL: Restoring previous memory policy: 4 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was expanded by 18MB 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was shrunk by 18MB 00:04:34.266 EAL: Trying to obtain current memory policy. 
00:04:34.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.266 EAL: Restoring previous memory policy: 4 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.266 EAL: Trying to obtain current memory policy. 00:04:34.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.266 EAL: Restoring previous memory policy: 4 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.266 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.266 EAL: request: mp_malloc_sync 00:04:34.266 EAL: No shared files mode enabled, IPC is disabled 00:04:34.266 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.528 EAL: Trying to obtain current memory policy. 00:04:34.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.528 EAL: Restoring previous memory policy: 4 00:04:34.528 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.528 EAL: request: mp_malloc_sync 00:04:34.528 EAL: No shared files mode enabled, IPC is disabled 00:04:34.528 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.528 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.528 EAL: request: mp_malloc_sync 00:04:34.528 EAL: No shared files mode enabled, IPC is disabled 00:04:34.528 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.790 EAL: Trying to obtain current memory policy. 
00:04:34.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.790 EAL: Restoring previous memory policy: 4 00:04:34.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.790 EAL: request: mp_malloc_sync 00:04:34.790 EAL: No shared files mode enabled, IPC is disabled 00:04:34.790 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.052 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.052 EAL: request: mp_malloc_sync 00:04:35.052 EAL: No shared files mode enabled, IPC is disabled 00:04:35.052 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.313 EAL: Trying to obtain current memory policy. 00:04:35.313 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.574 EAL: Restoring previous memory policy: 4 00:04:35.574 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.574 EAL: request: mp_malloc_sync 00:04:35.574 EAL: No shared files mode enabled, IPC is disabled 00:04:35.574 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.144 EAL: request: mp_malloc_sync 00:04:36.144 EAL: No shared files mode enabled, IPC is disabled 00:04:36.144 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.711 EAL: Trying to obtain current memory policy. 
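The vtophys_malloc_test expansions so far (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, with 1026 MB still to come) follow 2^k + 2 MB: each step roughly doubles, so the suite exercises both small and multi-hundred-MB heap growth with few iterations. The observed sequence can be regenerated as:

```shell
# Heap growth steps seen in the trace: (1 << k) + 2 MB for k = 1..10
for k in $(seq 1 10); do printf '%d ' $(( (1 << k) + 2 )); done; echo
```

(The pattern is my reading of the logged sizes; the test source may express it differently.)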
00:04:36.711 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.971 EAL: Restoring previous memory policy: 4 00:04:36.971 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.972 EAL: request: mp_malloc_sync 00:04:36.972 EAL: No shared files mode enabled, IPC is disabled 00:04:36.972 EAL: Heap on socket 0 was expanded by 1026MB 00:04:37.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.907 EAL: request: mp_malloc_sync 00:04:37.907 EAL: No shared files mode enabled, IPC is disabled 00:04:37.907 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:38.474 passed 00:04:38.474 00:04:38.474 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.474 suites 1 1 n/a 0 0 00:04:38.474 tests 2 2 2 0 0 00:04:38.474 asserts 5810 5810 5810 0 n/a 00:04:38.474 00:04:38.474 Elapsed time = 4.780 seconds 00:04:38.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.474 EAL: request: mp_malloc_sync 00:04:38.474 EAL: No shared files mode enabled, IPC is disabled 00:04:38.474 EAL: Heap on socket 0 was shrunk by 2MB 00:04:38.474 EAL: No shared files mode enabled, IPC is disabled 00:04:38.474 EAL: No shared files mode enabled, IPC is disabled 00:04:38.474 EAL: No shared files mode enabled, IPC is disabled 00:04:38.732 00:04:38.732 real 0m5.036s 00:04:38.732 user 0m4.109s 00:04:38.732 sys 0m0.777s 00:04:38.732 19:48:22 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.732 ************************************ 00:04:38.732 END TEST env_vtophys 00:04:38.732 ************************************ 00:04:38.732 19:48:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:38.732 19:48:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.732 19:48:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.732 19:48:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.732 19:48:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.732 
************************************ 00:04:38.732 START TEST env_pci 00:04:38.732 ************************************ 00:04:38.732 19:48:22 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.732 00:04:38.732 00:04:38.732 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.732 http://cunit.sourceforge.net/ 00:04:38.732 00:04:38.732 00:04:38.732 Suite: pci 00:04:38.732 Test: pci_hook ...[2024-09-30 19:48:22.931160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57334 has claimed it 00:04:38.732 passed 00:04:38.732 00:04:38.732 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.732 suites 1 1 n/a 0 0 00:04:38.732 tests 1 1 1 0 0 00:04:38.732 asserts 25 25 25 0 n/a 00:04:38.733 00:04:38.733 Elapsed time = 0.005 seconds 00:04:38.733 EAL: Cannot find device (10000:00:01.0) 00:04:38.733 EAL: Failed to attach device on primary process 00:04:38.733 00:04:38.733 real 0m0.064s 00:04:38.733 user 0m0.032s 00:04:38.733 sys 0m0.031s 00:04:38.733 19:48:22 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.733 ************************************ 00:04:38.733 END TEST env_pci 00:04:38.733 ************************************ 00:04:38.733 19:48:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:38.733 19:48:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.733 19:48:23 env -- env/env.sh@15 -- # uname 00:04:38.733 19:48:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.733 19:48:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.733 19:48:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.733 19:48:23 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:38.733 19:48:23 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.733 19:48:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.733 ************************************ 00:04:38.733 START TEST env_dpdk_post_init 00:04:38.733 ************************************ 00:04:38.733 19:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.733 EAL: Detected CPU lcores: 10 00:04:38.733 EAL: Detected NUMA nodes: 1 00:04:38.733 EAL: Detected shared linkage of DPDK 00:04:38.733 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.733 EAL: Selected IOVA mode 'PA' 00:04:38.990 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.990 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:38.990 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:38.990 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:38.990 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:38.990 Starting DPDK initialization... 00:04:38.990 Starting SPDK post initialization... 00:04:38.990 SPDK NVMe probe 00:04:38.990 Attaching to 0000:00:10.0 00:04:38.990 Attaching to 0000:00:11.0 00:04:38.990 Attaching to 0000:00:12.0 00:04:38.990 Attaching to 0000:00:13.0 00:04:38.990 Attached to 0000:00:10.0 00:04:38.990 Attached to 0000:00:11.0 00:04:38.990 Attached to 0000:00:13.0 00:04:38.990 Attached to 0000:00:12.0 00:04:38.990 Cleaning up... 
00:04:38.990 00:04:38.990 real 0m0.234s 00:04:38.990 user 0m0.073s 00:04:38.990 sys 0m0.062s 00:04:38.990 19:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.990 ************************************ 00:04:38.990 END TEST env_dpdk_post_init 00:04:38.990 ************************************ 00:04:38.990 19:48:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.990 19:48:23 env -- env/env.sh@26 -- # uname 00:04:38.990 19:48:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.990 19:48:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.990 19:48:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.990 19:48:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.990 19:48:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.990 ************************************ 00:04:38.990 START TEST env_mem_callbacks 00:04:38.990 ************************************ 00:04:38.990 19:48:23 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.990 EAL: Detected CPU lcores: 10 00:04:38.990 EAL: Detected NUMA nodes: 1 00:04:38.990 EAL: Detected shared linkage of DPDK 00:04:39.249 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.249 EAL: Selected IOVA mode 'PA' 00:04:39.249 00:04:39.249 00:04:39.249 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.249 http://cunit.sourceforge.net/ 00:04:39.249 00:04:39.249 00:04:39.249 Suite: memory 00:04:39.249 Test: test ... 
00:04:39.249 register 0x200000200000 2097152 00:04:39.249 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.249 malloc 3145728 00:04:39.249 register 0x200000400000 4194304 00:04:39.249 buf 0x2000004fffc0 len 3145728 PASSED 00:04:39.249 malloc 64 00:04:39.249 buf 0x2000004ffec0 len 64 PASSED 00:04:39.249 malloc 4194304 00:04:39.249 register 0x200000800000 6291456 00:04:39.249 buf 0x2000009fffc0 len 4194304 PASSED 00:04:39.249 free 0x2000004fffc0 3145728 00:04:39.249 free 0x2000004ffec0 64 00:04:39.249 unregister 0x200000400000 4194304 PASSED 00:04:39.249 free 0x2000009fffc0 4194304 00:04:39.249 unregister 0x200000800000 6291456 PASSED 00:04:39.249 malloc 8388608 00:04:39.249 register 0x200000400000 10485760 00:04:39.249 buf 0x2000005fffc0 len 8388608 PASSED 00:04:39.249 free 0x2000005fffc0 8388608 00:04:39.249 unregister 0x200000400000 10485760 PASSED 00:04:39.249 passed 00:04:39.249 00:04:39.249 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.249 suites 1 1 n/a 0 0 00:04:39.249 tests 1 1 1 0 0 00:04:39.249 asserts 15 15 15 0 n/a 00:04:39.249 00:04:39.249 Elapsed time = 0.046 seconds 00:04:39.249 00:04:39.249 real 0m0.222s 00:04:39.249 user 0m0.066s 00:04:39.249 sys 0m0.053s 00:04:39.249 19:48:23 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.249 ************************************ 00:04:39.249 19:48:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.249 END TEST env_mem_callbacks 00:04:39.249 ************************************ 00:04:39.249 00:04:39.249 real 0m6.279s 00:04:39.249 user 0m4.658s 00:04:39.249 sys 0m1.176s 00:04:39.249 19:48:23 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.249 19:48:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.249 ************************************ 00:04:39.249 END TEST env 00:04:39.249 ************************************ 00:04:39.507 19:48:23 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.507 19:48:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.507 19:48:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.507 19:48:23 -- common/autotest_common.sh@10 -- # set +x 00:04:39.507 ************************************ 00:04:39.507 START TEST rpc 00:04:39.507 ************************************ 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.507 * Looking for test storage... 00:04:39.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.507 19:48:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.507 19:48:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.507 19:48:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.507 19:48:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.507 19:48:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.507 19:48:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.507 19:48:23 rpc -- scripts/common.sh@345 -- # : 1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.507 19:48:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.507 19:48:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.507 19:48:23 rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.507 19:48:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.507 19:48:23 rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.507 19:48:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.507 19:48:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.507 19:48:23 rpc -- scripts/common.sh@368 -- # return 0 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.507 --rc genhtml_branch_coverage=1 00:04:39.507 --rc genhtml_function_coverage=1 00:04:39.507 --rc genhtml_legend=1 00:04:39.507 --rc geninfo_all_blocks=1 00:04:39.507 --rc geninfo_unexecuted_blocks=1 00:04:39.507 00:04:39.507 ' 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.507 --rc genhtml_branch_coverage=1 00:04:39.507 --rc genhtml_function_coverage=1 00:04:39.507 --rc genhtml_legend=1 00:04:39.507 --rc geninfo_all_blocks=1 00:04:39.507 --rc geninfo_unexecuted_blocks=1 00:04:39.507 00:04:39.507 ' 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:39.507 --rc genhtml_branch_coverage=1 00:04:39.507 --rc genhtml_function_coverage=1 00:04:39.507 --rc genhtml_legend=1 00:04:39.507 --rc geninfo_all_blocks=1 00:04:39.507 --rc geninfo_unexecuted_blocks=1 00:04:39.507 00:04:39.507 ' 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.507 --rc genhtml_branch_coverage=1 00:04:39.507 --rc genhtml_function_coverage=1 00:04:39.507 --rc genhtml_legend=1 00:04:39.507 --rc geninfo_all_blocks=1 00:04:39.507 --rc geninfo_unexecuted_blocks=1 00:04:39.507 00:04:39.507 ' 00:04:39.507 19:48:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57461 00:04:39.507 19:48:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:39.507 19:48:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.507 19:48:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57461 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@831 -- # '[' -z 57461 ']' 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.507 19:48:23 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.508 19:48:23 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.508 19:48:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.508 [2024-09-30 19:48:23.852851] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:39.508 [2024-09-30 19:48:23.852976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57461 ] 00:04:39.766 [2024-09-30 19:48:23.999976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.023 [2024-09-30 19:48:24.175981] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:40.023 [2024-09-30 19:48:24.176027] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57461' to capture a snapshot of events at runtime. 00:04:40.023 [2024-09-30 19:48:24.176037] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:40.023 [2024-09-30 19:48:24.176047] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:40.023 [2024-09-30 19:48:24.176054] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57461 for offline analysis/debug. 
00:04:40.023 [2024-09-30 19:48:24.176111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.589 19:48:24 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.589 19:48:24 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:40.589 19:48:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.589 19:48:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.589 19:48:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.589 19:48:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.589 19:48:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.589 19:48:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.589 19:48:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.589 ************************************ 00:04:40.589 START TEST rpc_integrity 00:04:40.589 ************************************ 00:04:40.589 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:40.589 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.589 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.589 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.589 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.589 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.589 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.589 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.590 19:48:24 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.590 { 00:04:40.590 "name": "Malloc0", 00:04:40.590 "aliases": [ 00:04:40.590 "1208acec-212b-470f-8de3-afb20e8298d6" 00:04:40.590 ], 00:04:40.590 "product_name": "Malloc disk", 00:04:40.590 "block_size": 512, 00:04:40.590 "num_blocks": 16384, 00:04:40.590 "uuid": "1208acec-212b-470f-8de3-afb20e8298d6", 00:04:40.590 "assigned_rate_limits": { 00:04:40.590 "rw_ios_per_sec": 0, 00:04:40.590 "rw_mbytes_per_sec": 0, 00:04:40.590 "r_mbytes_per_sec": 0, 00:04:40.590 "w_mbytes_per_sec": 0 00:04:40.590 }, 00:04:40.590 "claimed": false, 00:04:40.590 "zoned": false, 00:04:40.590 "supported_io_types": { 00:04:40.590 "read": true, 00:04:40.590 "write": true, 00:04:40.590 "unmap": true, 00:04:40.590 "flush": true, 00:04:40.590 "reset": true, 00:04:40.590 "nvme_admin": false, 00:04:40.590 "nvme_io": false, 00:04:40.590 "nvme_io_md": false, 00:04:40.590 "write_zeroes": true, 00:04:40.590 "zcopy": true, 00:04:40.590 "get_zone_info": false, 00:04:40.590 "zone_management": false, 00:04:40.590 "zone_append": false, 00:04:40.590 "compare": false, 00:04:40.590 "compare_and_write": false, 00:04:40.590 "abort": true, 00:04:40.590 "seek_hole": false, 
00:04:40.590 "seek_data": false, 00:04:40.590 "copy": true, 00:04:40.590 "nvme_iov_md": false 00:04:40.590 }, 00:04:40.590 "memory_domains": [ 00:04:40.590 { 00:04:40.590 "dma_device_id": "system", 00:04:40.590 "dma_device_type": 1 00:04:40.590 }, 00:04:40.590 { 00:04:40.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.590 "dma_device_type": 2 00:04:40.590 } 00:04:40.590 ], 00:04:40.590 "driver_specific": {} 00:04:40.590 } 00:04:40.590 ]' 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.590 [2024-09-30 19:48:24.891531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.590 [2024-09-30 19:48:24.891580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.590 [2024-09-30 19:48:24.891604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:40.590 [2024-09-30 19:48:24.891615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.590 [2024-09-30 19:48:24.893789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.590 [2024-09-30 19:48:24.893824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.590 Passthru0 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.590 { 00:04:40.590 "name": "Malloc0", 00:04:40.590 "aliases": [ 00:04:40.590 "1208acec-212b-470f-8de3-afb20e8298d6" 00:04:40.590 ], 00:04:40.590 "product_name": "Malloc disk", 00:04:40.590 "block_size": 512, 00:04:40.590 "num_blocks": 16384, 00:04:40.590 "uuid": "1208acec-212b-470f-8de3-afb20e8298d6", 00:04:40.590 "assigned_rate_limits": { 00:04:40.590 "rw_ios_per_sec": 0, 00:04:40.590 "rw_mbytes_per_sec": 0, 00:04:40.590 "r_mbytes_per_sec": 0, 00:04:40.590 "w_mbytes_per_sec": 0 00:04:40.590 }, 00:04:40.590 "claimed": true, 00:04:40.590 "claim_type": "exclusive_write", 00:04:40.590 "zoned": false, 00:04:40.590 "supported_io_types": { 00:04:40.590 "read": true, 00:04:40.590 "write": true, 00:04:40.590 "unmap": true, 00:04:40.590 "flush": true, 00:04:40.590 "reset": true, 00:04:40.590 "nvme_admin": false, 00:04:40.590 "nvme_io": false, 00:04:40.590 "nvme_io_md": false, 00:04:40.590 "write_zeroes": true, 00:04:40.590 "zcopy": true, 00:04:40.590 "get_zone_info": false, 00:04:40.590 "zone_management": false, 00:04:40.590 "zone_append": false, 00:04:40.590 "compare": false, 00:04:40.590 "compare_and_write": false, 00:04:40.590 "abort": true, 00:04:40.590 "seek_hole": false, 00:04:40.590 "seek_data": false, 00:04:40.590 "copy": true, 00:04:40.590 "nvme_iov_md": false 00:04:40.590 }, 00:04:40.590 "memory_domains": [ 00:04:40.590 { 00:04:40.590 "dma_device_id": "system", 00:04:40.590 "dma_device_type": 1 00:04:40.590 }, 00:04:40.590 { 00:04:40.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.590 "dma_device_type": 2 00:04:40.590 } 00:04:40.590 ], 00:04:40.590 "driver_specific": {} 00:04:40.590 }, 00:04:40.590 { 00:04:40.590 "name": "Passthru0", 00:04:40.590 "aliases": [ 00:04:40.590 "c8573a50-f4d5-54ad-a21a-56520556eccd" 00:04:40.590 ], 00:04:40.590 "product_name": "passthru", 00:04:40.590 
"block_size": 512, 00:04:40.590 "num_blocks": 16384, 00:04:40.590 "uuid": "c8573a50-f4d5-54ad-a21a-56520556eccd", 00:04:40.590 "assigned_rate_limits": { 00:04:40.590 "rw_ios_per_sec": 0, 00:04:40.590 "rw_mbytes_per_sec": 0, 00:04:40.590 "r_mbytes_per_sec": 0, 00:04:40.590 "w_mbytes_per_sec": 0 00:04:40.590 }, 00:04:40.590 "claimed": false, 00:04:40.590 "zoned": false, 00:04:40.590 "supported_io_types": { 00:04:40.590 "read": true, 00:04:40.590 "write": true, 00:04:40.590 "unmap": true, 00:04:40.590 "flush": true, 00:04:40.590 "reset": true, 00:04:40.590 "nvme_admin": false, 00:04:40.590 "nvme_io": false, 00:04:40.590 "nvme_io_md": false, 00:04:40.590 "write_zeroes": true, 00:04:40.590 "zcopy": true, 00:04:40.590 "get_zone_info": false, 00:04:40.590 "zone_management": false, 00:04:40.590 "zone_append": false, 00:04:40.590 "compare": false, 00:04:40.590 "compare_and_write": false, 00:04:40.590 "abort": true, 00:04:40.590 "seek_hole": false, 00:04:40.590 "seek_data": false, 00:04:40.590 "copy": true, 00:04:40.590 "nvme_iov_md": false 00:04:40.590 }, 00:04:40.590 "memory_domains": [ 00:04:40.590 { 00:04:40.590 "dma_device_id": "system", 00:04:40.590 "dma_device_type": 1 00:04:40.590 }, 00:04:40.590 { 00:04:40.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.590 "dma_device_type": 2 00:04:40.590 } 00:04:40.590 ], 00:04:40.590 "driver_specific": { 00:04:40.590 "passthru": { 00:04:40.590 "name": "Passthru0", 00:04:40.590 "base_bdev_name": "Malloc0" 00:04:40.590 } 00:04:40.590 } 00:04:40.590 } 00:04:40.590 ]' 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.590 19:48:24 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.590 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.590 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.849 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.849 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.849 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.849 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.849 19:48:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.849 ************************************ 00:04:40.849 END TEST rpc_integrity 00:04:40.849 ************************************ 00:04:40.849 19:48:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.849 00:04:40.849 real 0m0.242s 00:04:40.849 user 0m0.122s 00:04:40.849 sys 0m0.037s 00:04:40.849 19:48:25 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.849 19:48:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.849 19:48:25 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.849 19:48:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.849 19:48:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 ************************************ 00:04:40.849 START TEST rpc_plugins 00:04:40.849 ************************************ 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.849 { 00:04:40.849 "name": "Malloc1", 00:04:40.849 "aliases": [ 00:04:40.849 "c5ff61ea-cd50-436b-a998-da524be37915" 00:04:40.849 ], 00:04:40.849 "product_name": "Malloc disk", 00:04:40.849 "block_size": 4096, 00:04:40.849 "num_blocks": 256, 00:04:40.849 "uuid": "c5ff61ea-cd50-436b-a998-da524be37915", 00:04:40.849 "assigned_rate_limits": { 00:04:40.849 "rw_ios_per_sec": 0, 00:04:40.849 "rw_mbytes_per_sec": 0, 00:04:40.849 "r_mbytes_per_sec": 0, 00:04:40.849 "w_mbytes_per_sec": 0 00:04:40.849 }, 00:04:40.849 "claimed": false, 00:04:40.849 "zoned": false, 00:04:40.849 "supported_io_types": { 00:04:40.849 "read": true, 00:04:40.849 "write": true, 00:04:40.849 "unmap": true, 00:04:40.849 "flush": true, 00:04:40.849 "reset": true, 00:04:40.849 "nvme_admin": false, 00:04:40.849 "nvme_io": false, 00:04:40.849 "nvme_io_md": false, 00:04:40.849 "write_zeroes": true, 00:04:40.849 "zcopy": true, 00:04:40.849 "get_zone_info": false, 00:04:40.849 "zone_management": false, 00:04:40.849 "zone_append": false, 00:04:40.849 "compare": false, 00:04:40.849 "compare_and_write": false, 00:04:40.849 "abort": true, 00:04:40.849 "seek_hole": false, 00:04:40.849 "seek_data": false, 00:04:40.849 "copy": 
true, 00:04:40.849 "nvme_iov_md": false 00:04:40.849 }, 00:04:40.849 "memory_domains": [ 00:04:40.849 { 00:04:40.849 "dma_device_id": "system", 00:04:40.849 "dma_device_type": 1 00:04:40.849 }, 00:04:40.849 { 00:04:40.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.849 "dma_device_type": 2 00:04:40.849 } 00:04:40.849 ], 00:04:40.849 "driver_specific": {} 00:04:40.849 } 00:04:40.849 ]' 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.849 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.849 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.850 ************************************ 00:04:40.850 END TEST rpc_plugins 00:04:40.850 ************************************ 00:04:40.850 19:48:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.850 00:04:40.850 real 0m0.116s 00:04:40.850 user 0m0.070s 00:04:40.850 sys 0m0.012s 00:04:40.850 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.850 19:48:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 19:48:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:41.108 19:48:25 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.108 19:48:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.108 19:48:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 ************************************ 00:04:41.108 START TEST rpc_trace_cmd_test 00:04:41.108 ************************************ 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.108 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57461", 00:04:41.108 "tpoint_group_mask": "0x8", 00:04:41.108 "iscsi_conn": { 00:04:41.108 "mask": "0x2", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "scsi": { 00:04:41.108 "mask": "0x4", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "bdev": { 00:04:41.108 "mask": "0x8", 00:04:41.108 "tpoint_mask": "0xffffffffffffffff" 00:04:41.108 }, 00:04:41.108 "nvmf_rdma": { 00:04:41.108 "mask": "0x10", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "nvmf_tcp": { 00:04:41.108 "mask": "0x20", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "ftl": { 00:04:41.108 "mask": "0x40", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "blobfs": { 00:04:41.108 "mask": "0x80", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "dsa": { 00:04:41.108 "mask": "0x200", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "thread": { 00:04:41.108 "mask": "0x400", 00:04:41.108 
"tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "nvme_pcie": { 00:04:41.108 "mask": "0x800", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "iaa": { 00:04:41.108 "mask": "0x1000", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "nvme_tcp": { 00:04:41.108 "mask": "0x2000", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "bdev_nvme": { 00:04:41.108 "mask": "0x4000", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "sock": { 00:04:41.108 "mask": "0x8000", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "blob": { 00:04:41.108 "mask": "0x10000", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 }, 00:04:41.108 "bdev_raid": { 00:04:41.108 "mask": "0x20000", 00:04:41.108 "tpoint_mask": "0x0" 00:04:41.108 } 00:04:41.108 }' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.108 ************************************ 00:04:41.108 END TEST rpc_trace_cmd_test 00:04:41.108 ************************************ 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.108 00:04:41.108 real 0m0.170s 00:04:41.108 user 0m0.133s 00:04:41.108 sys 0m0.025s 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.108 19:48:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 19:48:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.108 19:48:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.108 19:48:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.108 19:48:25 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.108 19:48:25 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.108 19:48:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.108 ************************************ 00:04:41.108 START TEST rpc_daemon_integrity 00:04:41.108 ************************************ 00:04:41.108 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:41.108 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.108 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.108 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.366 { 00:04:41.366 "name": "Malloc2", 00:04:41.366 "aliases": [ 00:04:41.366 "15df034e-e95f-4bb2-85da-e7f5afbbefa2" 00:04:41.366 ], 00:04:41.366 "product_name": "Malloc disk", 00:04:41.366 "block_size": 512, 00:04:41.366 "num_blocks": 16384, 00:04:41.366 "uuid": "15df034e-e95f-4bb2-85da-e7f5afbbefa2", 00:04:41.366 "assigned_rate_limits": { 00:04:41.366 "rw_ios_per_sec": 0, 00:04:41.366 "rw_mbytes_per_sec": 0, 00:04:41.366 "r_mbytes_per_sec": 0, 00:04:41.366 "w_mbytes_per_sec": 0 00:04:41.366 }, 00:04:41.366 "claimed": false, 00:04:41.366 "zoned": false, 00:04:41.366 "supported_io_types": { 00:04:41.366 "read": true, 00:04:41.366 "write": true, 00:04:41.366 "unmap": true, 00:04:41.366 "flush": true, 00:04:41.366 "reset": true, 00:04:41.366 "nvme_admin": false, 00:04:41.366 "nvme_io": false, 00:04:41.366 "nvme_io_md": false, 00:04:41.366 "write_zeroes": true, 00:04:41.366 "zcopy": true, 00:04:41.366 "get_zone_info": false, 00:04:41.366 "zone_management": false, 00:04:41.366 "zone_append": false, 00:04:41.366 "compare": false, 00:04:41.366 "compare_and_write": false, 00:04:41.366 "abort": true, 00:04:41.366 "seek_hole": false, 00:04:41.366 "seek_data": false, 00:04:41.366 "copy": true, 00:04:41.366 "nvme_iov_md": false 00:04:41.366 }, 00:04:41.366 "memory_domains": [ 00:04:41.366 { 00:04:41.366 "dma_device_id": "system", 00:04:41.366 "dma_device_type": 1 00:04:41.366 }, 00:04:41.366 { 00:04:41.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.366 "dma_device_type": 2 00:04:41.366 } 00:04:41.366 ], 00:04:41.366 "driver_specific": {} 00:04:41.366 } 00:04:41.366 ]' 
00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.366 [2024-09-30 19:48:25.582253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.366 [2024-09-30 19:48:25.582314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.366 [2024-09-30 19:48:25.582333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:41.366 [2024-09-30 19:48:25.582343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.366 [2024-09-30 19:48:25.584501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.366 [2024-09-30 19:48:25.584534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.366 Passthru0 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.366 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.366 { 00:04:41.366 "name": "Malloc2", 00:04:41.366 "aliases": [ 00:04:41.366 "15df034e-e95f-4bb2-85da-e7f5afbbefa2" 00:04:41.366 ], 00:04:41.366 "product_name": "Malloc disk", 00:04:41.366 "block_size": 
512, 00:04:41.366 "num_blocks": 16384, 00:04:41.366 "uuid": "15df034e-e95f-4bb2-85da-e7f5afbbefa2", 00:04:41.366 "assigned_rate_limits": { 00:04:41.366 "rw_ios_per_sec": 0, 00:04:41.366 "rw_mbytes_per_sec": 0, 00:04:41.366 "r_mbytes_per_sec": 0, 00:04:41.366 "w_mbytes_per_sec": 0 00:04:41.367 }, 00:04:41.367 "claimed": true, 00:04:41.367 "claim_type": "exclusive_write", 00:04:41.367 "zoned": false, 00:04:41.367 "supported_io_types": { 00:04:41.367 "read": true, 00:04:41.367 "write": true, 00:04:41.367 "unmap": true, 00:04:41.367 "flush": true, 00:04:41.367 "reset": true, 00:04:41.367 "nvme_admin": false, 00:04:41.367 "nvme_io": false, 00:04:41.367 "nvme_io_md": false, 00:04:41.367 "write_zeroes": true, 00:04:41.367 "zcopy": true, 00:04:41.367 "get_zone_info": false, 00:04:41.367 "zone_management": false, 00:04:41.367 "zone_append": false, 00:04:41.367 "compare": false, 00:04:41.367 "compare_and_write": false, 00:04:41.367 "abort": true, 00:04:41.367 "seek_hole": false, 00:04:41.367 "seek_data": false, 00:04:41.367 "copy": true, 00:04:41.367 "nvme_iov_md": false 00:04:41.367 }, 00:04:41.367 "memory_domains": [ 00:04:41.367 { 00:04:41.367 "dma_device_id": "system", 00:04:41.367 "dma_device_type": 1 00:04:41.367 }, 00:04:41.367 { 00:04:41.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.367 "dma_device_type": 2 00:04:41.367 } 00:04:41.367 ], 00:04:41.367 "driver_specific": {} 00:04:41.367 }, 00:04:41.367 { 00:04:41.367 "name": "Passthru0", 00:04:41.367 "aliases": [ 00:04:41.367 "9c9600cb-6ba2-5f90-b36f-2f40553e889e" 00:04:41.367 ], 00:04:41.367 "product_name": "passthru", 00:04:41.367 "block_size": 512, 00:04:41.367 "num_blocks": 16384, 00:04:41.367 "uuid": "9c9600cb-6ba2-5f90-b36f-2f40553e889e", 00:04:41.367 "assigned_rate_limits": { 00:04:41.367 "rw_ios_per_sec": 0, 00:04:41.367 "rw_mbytes_per_sec": 0, 00:04:41.367 "r_mbytes_per_sec": 0, 00:04:41.367 "w_mbytes_per_sec": 0 00:04:41.367 }, 00:04:41.367 "claimed": false, 00:04:41.367 "zoned": false, 00:04:41.367 
"supported_io_types": { 00:04:41.367 "read": true, 00:04:41.367 "write": true, 00:04:41.367 "unmap": true, 00:04:41.367 "flush": true, 00:04:41.367 "reset": true, 00:04:41.367 "nvme_admin": false, 00:04:41.367 "nvme_io": false, 00:04:41.367 "nvme_io_md": false, 00:04:41.367 "write_zeroes": true, 00:04:41.367 "zcopy": true, 00:04:41.367 "get_zone_info": false, 00:04:41.367 "zone_management": false, 00:04:41.367 "zone_append": false, 00:04:41.367 "compare": false, 00:04:41.367 "compare_and_write": false, 00:04:41.367 "abort": true, 00:04:41.367 "seek_hole": false, 00:04:41.367 "seek_data": false, 00:04:41.367 "copy": true, 00:04:41.367 "nvme_iov_md": false 00:04:41.367 }, 00:04:41.367 "memory_domains": [ 00:04:41.367 { 00:04:41.367 "dma_device_id": "system", 00:04:41.367 "dma_device_type": 1 00:04:41.367 }, 00:04:41.367 { 00:04:41.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.367 "dma_device_type": 2 00:04:41.367 } 00:04:41.367 ], 00:04:41.367 "driver_specific": { 00:04:41.367 "passthru": { 00:04:41.367 "name": "Passthru0", 00:04:41.367 "base_bdev_name": "Malloc2" 00:04:41.367 } 00:04:41.367 } 00:04:41.367 } 00:04:41.367 ]' 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.367 ************************************ 00:04:41.367 END TEST rpc_daemon_integrity 00:04:41.367 ************************************ 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.367 00:04:41.367 real 0m0.247s 00:04:41.367 user 0m0.139s 00:04:41.367 sys 0m0.026s 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.367 19:48:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.625 19:48:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.625 19:48:25 rpc -- rpc/rpc.sh@84 -- # killprocess 57461 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@950 -- # '[' -z 57461 ']' 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@954 -- # kill -0 57461 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57461 00:04:41.625 killing process with pid 57461 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 57461' 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@969 -- # kill 57461 00:04:41.625 19:48:25 rpc -- common/autotest_common.sh@974 -- # wait 57461 00:04:43.065 ************************************ 00:04:43.065 END TEST rpc 00:04:43.065 ************************************ 00:04:43.065 00:04:43.065 real 0m3.665s 00:04:43.065 user 0m4.068s 00:04:43.065 sys 0m0.608s 00:04:43.065 19:48:27 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.065 19:48:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.065 19:48:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:43.065 19:48:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.065 19:48:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.065 19:48:27 -- common/autotest_common.sh@10 -- # set +x 00:04:43.065 ************************************ 00:04:43.065 START TEST skip_rpc 00:04:43.065 ************************************ 00:04:43.065 19:48:27 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:43.324 * Looking for test storage... 
00:04:43.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.324 19:48:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.324 --rc genhtml_branch_coverage=1 00:04:43.324 --rc genhtml_function_coverage=1 00:04:43.324 --rc genhtml_legend=1 00:04:43.324 --rc geninfo_all_blocks=1 00:04:43.324 --rc geninfo_unexecuted_blocks=1 00:04:43.324 00:04:43.324 ' 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.324 --rc genhtml_branch_coverage=1 00:04:43.324 --rc genhtml_function_coverage=1 00:04:43.324 --rc genhtml_legend=1 00:04:43.324 --rc geninfo_all_blocks=1 00:04:43.324 --rc geninfo_unexecuted_blocks=1 00:04:43.324 00:04:43.324 ' 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:43.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.324 --rc genhtml_branch_coverage=1 00:04:43.324 --rc genhtml_function_coverage=1 00:04:43.324 --rc genhtml_legend=1 00:04:43.324 --rc geninfo_all_blocks=1 00:04:43.324 --rc geninfo_unexecuted_blocks=1 00:04:43.324 00:04:43.324 ' 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.324 --rc genhtml_branch_coverage=1 00:04:43.324 --rc genhtml_function_coverage=1 00:04:43.324 --rc genhtml_legend=1 00:04:43.324 --rc geninfo_all_blocks=1 00:04:43.324 --rc geninfo_unexecuted_blocks=1 00:04:43.324 00:04:43.324 ' 00:04:43.324 19:48:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.324 19:48:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.324 19:48:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.324 19:48:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.324 ************************************ 00:04:43.324 START TEST skip_rpc 00:04:43.324 ************************************ 00:04:43.324 19:48:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:43.324 19:48:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57674 00:04:43.324 19:48:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.324 19:48:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:43.324 19:48:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:43.324 [2024-09-30 19:48:27.605702] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:43.324 [2024-09-30 19:48:27.605819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:04:43.582 [2024-09-30 19:48:27.752972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.582 [2024-09-30 19:48:27.897919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57674 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57674 ']' 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57674 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57674 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.844 killing process with pid 57674 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57674' 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57674 00:04:48.844 19:48:32 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57674 00:04:49.777 00:04:49.777 real 0m6.288s 00:04:49.777 user 0m5.935s 00:04:49.777 sys 0m0.254s 00:04:49.777 19:48:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.777 ************************************ 00:04:49.777 END TEST skip_rpc 00:04:49.777 19:48:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.777 ************************************ 00:04:49.777 19:48:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:49.777 19:48:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.777 19:48:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.777 19:48:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.777 
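The skip_rpc test above exercises autotest_common.sh's NOT helper: with no RPC server listening, `rpc_cmd spdk_get_version` must fail, and the wrapper inverts that status into a pass. A minimal sketch of that pattern, with `false` standing in for the failing RPC call (the real helper also validates the command and handles es thresholds):

```shell
# Expected-failure wrapper: succeeds only when the wrapped command fails.
NOT() {
  es=0
  "$@" || es=$?
  # Success for NOT means the wrapped command did NOT succeed.
  [ "$es" -ne 0 ]
}

# 'false' stands in for 'rpc_cmd spdk_get_version' against a --no-rpc-server target.
NOT false && echo 'expected failure observed'
```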
************************************ 00:04:49.777 START TEST skip_rpc_with_json 00:04:49.777 ************************************ 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57767 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57767 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57767 ']' 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.777 19:48:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.777 [2024-09-30 19:48:33.946070] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:49.777 [2024-09-30 19:48:33.946188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57767 ] 00:04:49.777 [2024-09-30 19:48:34.094696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.036 [2024-09-30 19:48:34.233257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.603 [2024-09-30 19:48:34.777222] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:50.603 request: 00:04:50.603 { 00:04:50.603 "trtype": "tcp", 00:04:50.603 "method": "nvmf_get_transports", 00:04:50.603 "req_id": 1 00:04:50.603 } 00:04:50.603 Got JSON-RPC error response 00:04:50.603 response: 00:04:50.603 { 00:04:50.603 "code": -19, 00:04:50.603 "message": "No such device" 00:04:50.603 } 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.603 [2024-09-30 19:48:34.789307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
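The nvmf_get_transports exchange above returns a JSON-RPC error body (code -19, "No such device") before the transport is created. Offline, that error shape can be checked with plain shell; the live call in the log goes through the stock rpc client against /var/tmp/spdk.sock:

```shell
# Trimmed stand-in for the JSON-RPC error body shown in the log.
resp='{"code": -19, "message": "No such device"}'

err_code=''
case "$resp" in
  *'"code": -19'*) err_code=-19 ;;
esac
echo "error code: $err_code"
```

-19 is ENODEV, which is why the test expects this exact failure until nvmf_create_transport -t tcp succeeds on the next call.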
00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.603 19:48:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.603 { 00:04:50.603 "subsystems": [ 00:04:50.603 { 00:04:50.603 "subsystem": "fsdev", 00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "fsdev_set_opts", 00:04:50.603 "params": { 00:04:50.603 "fsdev_io_pool_size": 65535, 00:04:50.603 "fsdev_io_cache_size": 256 00:04:50.603 } 00:04:50.603 } 00:04:50.603 ] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "keyring", 00:04:50.603 "config": [] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "iobuf", 00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "iobuf_set_options", 00:04:50.603 "params": { 00:04:50.603 "small_pool_count": 8192, 00:04:50.603 "large_pool_count": 1024, 00:04:50.603 "small_bufsize": 8192, 00:04:50.603 "large_bufsize": 135168 00:04:50.603 } 00:04:50.603 } 00:04:50.603 ] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "sock", 00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "sock_set_default_impl", 00:04:50.603 "params": { 00:04:50.603 "impl_name": "posix" 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "sock_impl_set_options", 00:04:50.603 "params": { 00:04:50.603 "impl_name": "ssl", 00:04:50.603 "recv_buf_size": 4096, 00:04:50.603 "send_buf_size": 4096, 00:04:50.603 "enable_recv_pipe": true, 00:04:50.603 "enable_quickack": false, 00:04:50.603 "enable_placement_id": 0, 00:04:50.603 
"enable_zerocopy_send_server": true, 00:04:50.603 "enable_zerocopy_send_client": false, 00:04:50.603 "zerocopy_threshold": 0, 00:04:50.603 "tls_version": 0, 00:04:50.603 "enable_ktls": false 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "sock_impl_set_options", 00:04:50.603 "params": { 00:04:50.603 "impl_name": "posix", 00:04:50.603 "recv_buf_size": 2097152, 00:04:50.603 "send_buf_size": 2097152, 00:04:50.603 "enable_recv_pipe": true, 00:04:50.603 "enable_quickack": false, 00:04:50.603 "enable_placement_id": 0, 00:04:50.603 "enable_zerocopy_send_server": true, 00:04:50.603 "enable_zerocopy_send_client": false, 00:04:50.603 "zerocopy_threshold": 0, 00:04:50.603 "tls_version": 0, 00:04:50.603 "enable_ktls": false 00:04:50.603 } 00:04:50.603 } 00:04:50.603 ] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "vmd", 00:04:50.603 "config": [] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "accel", 00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "accel_set_options", 00:04:50.603 "params": { 00:04:50.603 "small_cache_size": 128, 00:04:50.603 "large_cache_size": 16, 00:04:50.603 "task_count": 2048, 00:04:50.603 "sequence_count": 2048, 00:04:50.603 "buf_count": 2048 00:04:50.603 } 00:04:50.603 } 00:04:50.603 ] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "bdev", 00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "bdev_set_options", 00:04:50.603 "params": { 00:04:50.603 "bdev_io_pool_size": 65535, 00:04:50.603 "bdev_io_cache_size": 256, 00:04:50.603 "bdev_auto_examine": true, 00:04:50.603 "iobuf_small_cache_size": 128, 00:04:50.603 "iobuf_large_cache_size": 16 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "bdev_raid_set_options", 00:04:50.603 "params": { 00:04:50.603 "process_window_size_kb": 1024, 00:04:50.603 "process_max_bandwidth_mb_sec": 0 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "bdev_iscsi_set_options", 00:04:50.603 "params": { 00:04:50.603 
"timeout_sec": 30 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "bdev_nvme_set_options", 00:04:50.603 "params": { 00:04:50.603 "action_on_timeout": "none", 00:04:50.603 "timeout_us": 0, 00:04:50.603 "timeout_admin_us": 0, 00:04:50.603 "keep_alive_timeout_ms": 10000, 00:04:50.603 "arbitration_burst": 0, 00:04:50.603 "low_priority_weight": 0, 00:04:50.603 "medium_priority_weight": 0, 00:04:50.603 "high_priority_weight": 0, 00:04:50.603 "nvme_adminq_poll_period_us": 10000, 00:04:50.603 "nvme_ioq_poll_period_us": 0, 00:04:50.603 "io_queue_requests": 0, 00:04:50.603 "delay_cmd_submit": true, 00:04:50.603 "transport_retry_count": 4, 00:04:50.603 "bdev_retry_count": 3, 00:04:50.603 "transport_ack_timeout": 0, 00:04:50.603 "ctrlr_loss_timeout_sec": 0, 00:04:50.603 "reconnect_delay_sec": 0, 00:04:50.603 "fast_io_fail_timeout_sec": 0, 00:04:50.603 "disable_auto_failback": false, 00:04:50.603 "generate_uuids": false, 00:04:50.603 "transport_tos": 0, 00:04:50.603 "nvme_error_stat": false, 00:04:50.603 "rdma_srq_size": 0, 00:04:50.603 "io_path_stat": false, 00:04:50.603 "allow_accel_sequence": false, 00:04:50.603 "rdma_max_cq_size": 0, 00:04:50.603 "rdma_cm_event_timeout_ms": 0, 00:04:50.603 "dhchap_digests": [ 00:04:50.603 "sha256", 00:04:50.603 "sha384", 00:04:50.603 "sha512" 00:04:50.603 ], 00:04:50.603 "dhchap_dhgroups": [ 00:04:50.603 "null", 00:04:50.603 "ffdhe2048", 00:04:50.603 "ffdhe3072", 00:04:50.603 "ffdhe4096", 00:04:50.603 "ffdhe6144", 00:04:50.603 "ffdhe8192" 00:04:50.603 ] 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "bdev_nvme_set_hotplug", 00:04:50.603 "params": { 00:04:50.603 "period_us": 100000, 00:04:50.603 "enable": false 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "bdev_wait_for_examine" 00:04:50.603 } 00:04:50.603 ] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "scsi", 00:04:50.603 "config": null 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "scheduler", 
00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "framework_set_scheduler", 00:04:50.603 "params": { 00:04:50.603 "name": "static" 00:04:50.603 } 00:04:50.603 } 00:04:50.603 ] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "vhost_scsi", 00:04:50.603 "config": [] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "vhost_blk", 00:04:50.603 "config": [] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "ublk", 00:04:50.603 "config": [] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "nbd", 00:04:50.603 "config": [] 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "subsystem": "nvmf", 00:04:50.603 "config": [ 00:04:50.603 { 00:04:50.603 "method": "nvmf_set_config", 00:04:50.603 "params": { 00:04:50.603 "discovery_filter": "match_any", 00:04:50.603 "admin_cmd_passthru": { 00:04:50.603 "identify_ctrlr": false 00:04:50.603 }, 00:04:50.603 "dhchap_digests": [ 00:04:50.603 "sha256", 00:04:50.603 "sha384", 00:04:50.603 "sha512" 00:04:50.603 ], 00:04:50.603 "dhchap_dhgroups": [ 00:04:50.603 "null", 00:04:50.603 "ffdhe2048", 00:04:50.603 "ffdhe3072", 00:04:50.603 "ffdhe4096", 00:04:50.603 "ffdhe6144", 00:04:50.603 "ffdhe8192" 00:04:50.603 ] 00:04:50.603 } 00:04:50.603 }, 00:04:50.603 { 00:04:50.603 "method": "nvmf_set_max_subsystems", 00:04:50.603 "params": { 00:04:50.603 "max_subsystems": 1024 00:04:50.604 } 00:04:50.604 }, 00:04:50.604 { 00:04:50.604 "method": "nvmf_set_crdt", 00:04:50.604 "params": { 00:04:50.604 "crdt1": 0, 00:04:50.604 "crdt2": 0, 00:04:50.604 "crdt3": 0 00:04:50.604 } 00:04:50.604 }, 00:04:50.604 { 00:04:50.604 "method": "nvmf_create_transport", 00:04:50.604 "params": { 00:04:50.604 "trtype": "TCP", 00:04:50.604 "max_queue_depth": 128, 00:04:50.604 "max_io_qpairs_per_ctrlr": 127, 00:04:50.604 "in_capsule_data_size": 4096, 00:04:50.604 "max_io_size": 131072, 00:04:50.604 "io_unit_size": 131072, 00:04:50.604 "max_aq_depth": 128, 00:04:50.604 "num_shared_buffers": 511, 00:04:50.604 "buf_cache_size": 4294967295, 
00:04:50.604 "dif_insert_or_strip": false, 00:04:50.604 "zcopy": false, 00:04:50.604 "c2h_success": true, 00:04:50.604 "sock_priority": 0, 00:04:50.604 "abort_timeout_sec": 1, 00:04:50.604 "ack_timeout": 0, 00:04:50.604 "data_wr_pool_size": 0 00:04:50.604 } 00:04:50.604 } 00:04:50.604 ] 00:04:50.604 }, 00:04:50.604 { 00:04:50.604 "subsystem": "iscsi", 00:04:50.604 "config": [ 00:04:50.604 { 00:04:50.604 "method": "iscsi_set_options", 00:04:50.604 "params": { 00:04:50.604 "node_base": "iqn.2016-06.io.spdk", 00:04:50.604 "max_sessions": 128, 00:04:50.604 "max_connections_per_session": 2, 00:04:50.604 "max_queue_depth": 64, 00:04:50.604 "default_time2wait": 2, 00:04:50.604 "default_time2retain": 20, 00:04:50.604 "first_burst_length": 8192, 00:04:50.604 "immediate_data": true, 00:04:50.604 "allow_duplicated_isid": false, 00:04:50.604 "error_recovery_level": 0, 00:04:50.604 "nop_timeout": 60, 00:04:50.604 "nop_in_interval": 30, 00:04:50.604 "disable_chap": false, 00:04:50.604 "require_chap": false, 00:04:50.604 "mutual_chap": false, 00:04:50.604 "chap_group": 0, 00:04:50.604 "max_large_datain_per_connection": 64, 00:04:50.604 "max_r2t_per_connection": 4, 00:04:50.604 "pdu_pool_size": 36864, 00:04:50.604 "immediate_data_pool_size": 16384, 00:04:50.604 "data_out_pool_size": 2048 00:04:50.604 } 00:04:50.604 } 00:04:50.604 ] 00:04:50.604 } 00:04:50.604 ] 00:04:50.604 } 00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57767 00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57767 ']' 00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57767 00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:50.604 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57767 00:04:50.862 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.862 killing process with pid 57767 00:04:50.862 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.862 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57767' 00:04:50.863 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57767 00:04:50.863 19:48:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57767 00:04:52.236 19:48:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57806 00:04:52.236 19:48:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:52.236 19:48:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57806 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57806 ']' 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57806 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57806 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.497 killing process with pid 57806 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57806' 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57806 00:04:57.497 19:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57806 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.432 00:04:58.432 real 0m8.639s 00:04:58.432 user 0m8.274s 00:04:58.432 sys 0m0.584s 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.432 ************************************ 00:04:58.432 END TEST skip_rpc_with_json 00:04:58.432 ************************************ 00:04:58.432 19:48:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.432 19:48:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.432 19:48:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.432 19:48:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.432 ************************************ 00:04:58.432 START TEST skip_rpc_with_delay 00:04:58.432 ************************************ 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.432 [2024-09-30 19:48:42.649546] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:58.432 [2024-09-30 19:48:42.649665] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.432 00:04:58.432 real 0m0.127s 00:04:58.432 user 0m0.061s 00:04:58.432 sys 0m0.065s 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.432 19:48:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:58.432 ************************************ 00:04:58.432 END TEST skip_rpc_with_delay 00:04:58.432 ************************************ 00:04:58.432 19:48:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:58.432 19:48:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:58.432 19:48:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:58.433 19:48:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.433 19:48:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.433 19:48:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.433 ************************************ 00:04:58.433 START TEST exit_on_failed_rpc_init 00:04:58.433 ************************************ 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57929 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57929 00:04:58.433 19:48:42 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57929 ']' 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.433 19:48:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.691 [2024-09-30 19:48:42.823765] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:58.691 [2024-09-30 19:48:42.823889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57929 ] 00:04:58.691 [2024-09-30 19:48:42.973330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.949 [2024-09-30 19:48:43.152820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.515 19:48:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.515 19:48:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.515 [2024-09-30 19:48:43.820490] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:59.515 [2024-09-30 19:48:43.820617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:04:59.774 [2024-09-30 19:48:43.972438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.032 [2024-09-30 19:48:44.150512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.032 [2024-09-30 19:48:44.150599] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:00.032 [2024-09-30 19:48:44.150613] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:00.032 [2024-09-30 19:48:44.150623] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57929 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57929 ']' 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57929 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57929 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.290 killing process with pid 57929 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57929' 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57929 00:05:00.290 19:48:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57929 00:05:01.664 00:05:01.664 real 0m3.214s 00:05:01.664 user 0m3.671s 00:05:01.664 sys 0m0.433s 00:05:01.664 19:48:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.664 ************************************ 00:05:01.664 END TEST exit_on_failed_rpc_init 00:05:01.664 ************************************ 00:05:01.664 19:48:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.664 19:48:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.664 00:05:01.664 real 0m18.647s 00:05:01.664 user 0m18.084s 00:05:01.664 sys 0m1.518s 00:05:01.664 19:48:46 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.664 19:48:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.664 ************************************ 00:05:01.664 END TEST skip_rpc 00:05:01.664 ************************************ 00:05:01.923 19:48:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.923 19:48:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.923 19:48:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.923 19:48:46 -- common/autotest_common.sh@10 -- # set +x 00:05:01.923 ************************************ 00:05:01.923 START TEST rpc_client 00:05:01.923 ************************************ 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.923 * Looking for test storage... 
00:05:01.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.923 19:48:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.923 --rc genhtml_branch_coverage=1 00:05:01.923 --rc genhtml_function_coverage=1 00:05:01.923 --rc genhtml_legend=1 00:05:01.923 --rc geninfo_all_blocks=1 00:05:01.923 --rc geninfo_unexecuted_blocks=1 00:05:01.923 00:05:01.923 ' 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.923 --rc genhtml_branch_coverage=1 00:05:01.923 --rc genhtml_function_coverage=1 00:05:01.923 --rc genhtml_legend=1 00:05:01.923 --rc geninfo_all_blocks=1 00:05:01.923 --rc geninfo_unexecuted_blocks=1 00:05:01.923 00:05:01.923 ' 00:05:01.923 19:48:46 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.923 --rc genhtml_branch_coverage=1 00:05:01.923 --rc genhtml_function_coverage=1 00:05:01.923 --rc genhtml_legend=1 00:05:01.923 --rc geninfo_all_blocks=1 00:05:01.923 --rc geninfo_unexecuted_blocks=1 00:05:01.923 00:05:01.923 ' 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.923 --rc genhtml_branch_coverage=1 00:05:01.923 --rc genhtml_function_coverage=1 00:05:01.923 --rc genhtml_legend=1 00:05:01.923 --rc geninfo_all_blocks=1 00:05:01.923 --rc geninfo_unexecuted_blocks=1 00:05:01.923 00:05:01.923 ' 00:05:01.923 19:48:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:01.923 OK 00:05:01.923 19:48:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.923 00:05:01.923 real 0m0.189s 00:05:01.923 user 0m0.107s 00:05:01.923 sys 0m0.091s 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.923 ************************************ 00:05:01.923 END TEST rpc_client 00:05:01.923 ************************************ 00:05:01.923 19:48:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.923 19:48:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.923 19:48:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.923 19:48:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.923 19:48:46 -- common/autotest_common.sh@10 -- # set +x 00:05:02.182 ************************************ 00:05:02.182 START TEST json_config 00:05:02.182 ************************************ 00:05:02.182 19:48:46 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.182 19:48:46 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.183 19:48:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.183 19:48:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.183 19:48:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.183 19:48:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.183 19:48:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.183 19:48:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:02.183 19:48:46 json_config -- scripts/common.sh@345 -- # : 1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.183 19:48:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.183 19:48:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@353 -- # local d=1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.183 19:48:46 json_config -- scripts/common.sh@355 -- # echo 1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.183 19:48:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@353 -- # local d=2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.183 19:48:46 json_config -- scripts/common.sh@355 -- # echo 2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.183 19:48:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.183 19:48:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.183 19:48:46 json_config -- scripts/common.sh@368 -- # return 0 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.183 --rc genhtml_branch_coverage=1 00:05:02.183 --rc genhtml_function_coverage=1 00:05:02.183 --rc genhtml_legend=1 00:05:02.183 --rc geninfo_all_blocks=1 00:05:02.183 --rc geninfo_unexecuted_blocks=1 00:05:02.183 00:05:02.183 ' 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.183 --rc genhtml_branch_coverage=1 00:05:02.183 --rc genhtml_function_coverage=1 00:05:02.183 --rc genhtml_legend=1 00:05:02.183 --rc geninfo_all_blocks=1 00:05:02.183 --rc geninfo_unexecuted_blocks=1 00:05:02.183 00:05:02.183 ' 00:05:02.183 19:48:46 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.183 --rc genhtml_branch_coverage=1 00:05:02.183 --rc genhtml_function_coverage=1 00:05:02.183 --rc genhtml_legend=1 00:05:02.183 --rc geninfo_all_blocks=1 00:05:02.183 --rc geninfo_unexecuted_blocks=1 00:05:02.183 00:05:02.183 ' 00:05:02.183 19:48:46 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.183 --rc genhtml_branch_coverage=1 00:05:02.183 --rc genhtml_function_coverage=1 00:05:02.183 --rc genhtml_legend=1 00:05:02.183 --rc geninfo_all_blocks=1 00:05:02.183 --rc geninfo_unexecuted_blocks=1 00:05:02.183 00:05:02.183 ' 00:05:02.183 19:48:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bef6a72b-d837-4d7e-b594-b92515d61423 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=bef6a72b-d837-4d7e-b594-b92515d61423 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.183 19:48:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.183 19:48:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.183 19:48:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.183 19:48:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.183 19:48:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.183 19:48:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.183 19:48:46 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.183 19:48:46 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.183 19:48:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@51 -- # : 0 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.183 19:48:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.184 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.184 19:48:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.184 19:48:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.184 19:48:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.184 WARNING: No tests are enabled so not running JSON configuration tests 00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:02.184 19:48:46 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:02.184 00:05:02.184 real 0m0.141s 00:05:02.184 user 0m0.089s 00:05:02.184 sys 0m0.054s 00:05:02.184 19:48:46 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.184 ************************************ 00:05:02.184 END TEST json_config 00:05:02.184 ************************************ 00:05:02.184 19:48:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.184 19:48:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.184 19:48:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.184 19:48:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.184 19:48:46 -- common/autotest_common.sh@10 -- # set +x 00:05:02.184 ************************************ 00:05:02.184 START TEST json_config_extra_key 00:05:02.184 ************************************ 00:05:02.184 19:48:46 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.184 19:48:46 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.184 19:48:46 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:02.184 19:48:46 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.443 19:48:46 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:02.443 19:48:46 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.443 19:48:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.443 --rc genhtml_branch_coverage=1 00:05:02.443 --rc genhtml_function_coverage=1 00:05:02.443 --rc genhtml_legend=1 00:05:02.443 --rc geninfo_all_blocks=1 00:05:02.443 --rc geninfo_unexecuted_blocks=1 00:05:02.443 00:05:02.443 ' 00:05:02.443 19:48:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.443 --rc genhtml_branch_coverage=1 00:05:02.443 --rc genhtml_function_coverage=1 00:05:02.443 --rc 
genhtml_legend=1 00:05:02.443 --rc geninfo_all_blocks=1 00:05:02.443 --rc geninfo_unexecuted_blocks=1 00:05:02.443 00:05:02.443 ' 00:05:02.443 19:48:46 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:02.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.443 --rc genhtml_branch_coverage=1 00:05:02.443 --rc genhtml_function_coverage=1 00:05:02.443 --rc genhtml_legend=1 00:05:02.443 --rc geninfo_all_blocks=1 00:05:02.443 --rc geninfo_unexecuted_blocks=1 00:05:02.443 00:05:02.443 ' 00:05:02.443 19:48:46 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.443 --rc genhtml_branch_coverage=1 00:05:02.443 --rc genhtml_function_coverage=1 00:05:02.443 --rc genhtml_legend=1 00:05:02.443 --rc geninfo_all_blocks=1 00:05:02.443 --rc geninfo_unexecuted_blocks=1 00:05:02.443 00:05:02.443 ' 00:05:02.443 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bef6a72b-d837-4d7e-b594-b92515d61423 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bef6a72b-d837-4d7e-b594-b92515d61423 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.443 19:48:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.443 19:48:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.443 19:48:46 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.443 19:48:46 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.444 19:48:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.444 19:48:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:02.444 19:48:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.444 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.444 19:48:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:02.444 INFO: launching applications... 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:02.444 19:48:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.444 Waiting for target to run... 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58140 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58140 /var/tmp/spdk_tgt.sock 00:05:02.444 19:48:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58140 ']' 00:05:02.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.444 19:48:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.444 19:48:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.444 19:48:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:02.444 19:48:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.444 19:48:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.444 19:48:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.444 [2024-09-30 19:48:46.700981] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:02.444 [2024-09-30 19:48:46.701105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58140 ] 00:05:02.702 [2024-09-30 19:48:47.002429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.960 [2024-09-30 19:48:47.170810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.525 19:48:47 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.525 19:48:47 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:03.525 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:03.525 INFO: shutting down applications... 00:05:03.525 19:48:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:03.525 19:48:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58140 ]] 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58140 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58140 00:05:03.525 19:48:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.089 19:48:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.089 19:48:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.089 19:48:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58140 00:05:04.089 19:48:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.347 19:48:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.347 19:48:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.347 19:48:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58140 00:05:04.347 19:48:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.915 19:48:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.915 19:48:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.915 19:48:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58140 00:05:04.915 19:48:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58140 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.482 SPDK target shutdown done 00:05:05.482 19:48:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.482 Success 00:05:05.482 19:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:05.482 00:05:05.482 real 0m3.208s 00:05:05.482 user 0m2.752s 00:05:05.482 sys 0m0.406s 00:05:05.482 19:48:49 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.482 ************************************ 00:05:05.482 END TEST json_config_extra_key 00:05:05.482 ************************************ 00:05:05.482 19:48:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.482 19:48:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.482 19:48:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.482 19:48:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.482 19:48:49 -- common/autotest_common.sh@10 -- # set +x 00:05:05.482 ************************************ 00:05:05.482 START TEST alias_rpc 00:05:05.482 ************************************ 00:05:05.482 19:48:49 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.482 * Looking for test storage... 
00:05:05.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:05.482 19:48:49 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.482 19:48:49 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.482 19:48:49 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.740 19:48:49 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.740 19:48:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:05.740 19:48:49 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.740 19:48:49 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.740 --rc genhtml_branch_coverage=1 00:05:05.741 --rc genhtml_function_coverage=1 00:05:05.741 --rc genhtml_legend=1 00:05:05.741 --rc geninfo_all_blocks=1 00:05:05.741 --rc geninfo_unexecuted_blocks=1 00:05:05.741 00:05:05.741 ' 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.741 --rc genhtml_branch_coverage=1 00:05:05.741 --rc genhtml_function_coverage=1 00:05:05.741 --rc genhtml_legend=1 00:05:05.741 --rc geninfo_all_blocks=1 00:05:05.741 --rc geninfo_unexecuted_blocks=1 00:05:05.741 00:05:05.741 ' 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:05:05.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.741 --rc genhtml_branch_coverage=1 00:05:05.741 --rc genhtml_function_coverage=1 00:05:05.741 --rc genhtml_legend=1 00:05:05.741 --rc geninfo_all_blocks=1 00:05:05.741 --rc geninfo_unexecuted_blocks=1 00:05:05.741 00:05:05.741 ' 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.741 --rc genhtml_branch_coverage=1 00:05:05.741 --rc genhtml_function_coverage=1 00:05:05.741 --rc genhtml_legend=1 00:05:05.741 --rc geninfo_all_blocks=1 00:05:05.741 --rc geninfo_unexecuted_blocks=1 00:05:05.741 00:05:05.741 ' 00:05:05.741 19:48:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.741 19:48:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.741 19:48:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58240 00:05:05.741 19:48:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58240 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58240 ']' 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.741 19:48:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.741 [2024-09-30 19:48:49.963293] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:05.741 [2024-09-30 19:48:49.963957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58240 ] 00:05:05.999 [2024-09-30 19:48:50.106716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.999 [2024-09-30 19:48:50.330158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.934 19:48:50 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.934 19:48:50 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:06.934 19:48:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:06.934 19:48:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58240 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58240 ']' 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58240 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58240 00:05:06.934 killing process with pid 58240 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58240' 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 58240 00:05:06.934 19:48:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 58240 00:05:08.836 ************************************ 00:05:08.836 END TEST alias_rpc 00:05:08.836 ************************************ 00:05:08.836 00:05:08.836 real 
0m2.934s 00:05:08.836 user 0m2.945s 00:05:08.836 sys 0m0.476s 00:05:08.836 19:48:52 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.836 19:48:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.836 19:48:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:08.836 19:48:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:08.836 19:48:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.836 19:48:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.836 19:48:52 -- common/autotest_common.sh@10 -- # set +x 00:05:08.836 ************************************ 00:05:08.836 START TEST spdkcli_tcp 00:05:08.836 ************************************ 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:08.836 * Looking for test storage... 00:05:08.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.836 
19:48:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.836 19:48:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.836 --rc genhtml_branch_coverage=1 00:05:08.836 --rc genhtml_function_coverage=1 00:05:08.836 --rc genhtml_legend=1 
00:05:08.836 --rc geninfo_all_blocks=1 00:05:08.836 --rc geninfo_unexecuted_blocks=1 00:05:08.836 00:05:08.836 ' 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.836 --rc genhtml_branch_coverage=1 00:05:08.836 --rc genhtml_function_coverage=1 00:05:08.836 --rc genhtml_legend=1 00:05:08.836 --rc geninfo_all_blocks=1 00:05:08.836 --rc geninfo_unexecuted_blocks=1 00:05:08.836 00:05:08.836 ' 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.836 --rc genhtml_branch_coverage=1 00:05:08.836 --rc genhtml_function_coverage=1 00:05:08.836 --rc genhtml_legend=1 00:05:08.836 --rc geninfo_all_blocks=1 00:05:08.836 --rc geninfo_unexecuted_blocks=1 00:05:08.836 00:05:08.836 ' 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.836 --rc genhtml_branch_coverage=1 00:05:08.836 --rc genhtml_function_coverage=1 00:05:08.836 --rc genhtml_legend=1 00:05:08.836 --rc geninfo_all_blocks=1 00:05:08.836 --rc geninfo_unexecuted_blocks=1 00:05:08.836 00:05:08.836 ' 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.836 19:48:52 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58335 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58335 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58335 ']' 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.836 19:48:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.836 19:48:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.836 [2024-09-30 19:48:52.962212] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:08.836 [2024-09-30 19:48:52.962497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58335 ] 00:05:08.836 [2024-09-30 19:48:53.102587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.095 [2024-09-30 19:48:53.289081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.095 [2024-09-30 19:48:53.289149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.661 19:48:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.661 19:48:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:09.661 19:48:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58347 00:05:09.661 19:48:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.661 19:48:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:09.920 [ 00:05:09.920 "bdev_malloc_delete", 00:05:09.920 "bdev_malloc_create", 00:05:09.920 "bdev_null_resize", 00:05:09.920 "bdev_null_delete", 00:05:09.920 "bdev_null_create", 00:05:09.920 "bdev_nvme_cuse_unregister", 00:05:09.920 "bdev_nvme_cuse_register", 00:05:09.920 "bdev_opal_new_user", 00:05:09.920 "bdev_opal_set_lock_state", 00:05:09.920 "bdev_opal_delete", 00:05:09.920 "bdev_opal_get_info", 00:05:09.920 "bdev_opal_create", 00:05:09.920 "bdev_nvme_opal_revert", 00:05:09.920 "bdev_nvme_opal_init", 00:05:09.920 "bdev_nvme_send_cmd", 00:05:09.920 "bdev_nvme_set_keys", 00:05:09.920 "bdev_nvme_get_path_iostat", 00:05:09.920 "bdev_nvme_get_mdns_discovery_info", 00:05:09.920 "bdev_nvme_stop_mdns_discovery", 00:05:09.920 "bdev_nvme_start_mdns_discovery", 00:05:09.920 "bdev_nvme_set_multipath_policy", 00:05:09.920 
"bdev_nvme_set_preferred_path", 00:05:09.920 "bdev_nvme_get_io_paths", 00:05:09.920 "bdev_nvme_remove_error_injection", 00:05:09.920 "bdev_nvme_add_error_injection", 00:05:09.920 "bdev_nvme_get_discovery_info", 00:05:09.920 "bdev_nvme_stop_discovery", 00:05:09.920 "bdev_nvme_start_discovery", 00:05:09.920 "bdev_nvme_get_controller_health_info", 00:05:09.920 "bdev_nvme_disable_controller", 00:05:09.920 "bdev_nvme_enable_controller", 00:05:09.920 "bdev_nvme_reset_controller", 00:05:09.920 "bdev_nvme_get_transport_statistics", 00:05:09.920 "bdev_nvme_apply_firmware", 00:05:09.920 "bdev_nvme_detach_controller", 00:05:09.920 "bdev_nvme_get_controllers", 00:05:09.920 "bdev_nvme_attach_controller", 00:05:09.920 "bdev_nvme_set_hotplug", 00:05:09.920 "bdev_nvme_set_options", 00:05:09.920 "bdev_passthru_delete", 00:05:09.920 "bdev_passthru_create", 00:05:09.920 "bdev_lvol_set_parent_bdev", 00:05:09.920 "bdev_lvol_set_parent", 00:05:09.920 "bdev_lvol_check_shallow_copy", 00:05:09.920 "bdev_lvol_start_shallow_copy", 00:05:09.920 "bdev_lvol_grow_lvstore", 00:05:09.920 "bdev_lvol_get_lvols", 00:05:09.920 "bdev_lvol_get_lvstores", 00:05:09.920 "bdev_lvol_delete", 00:05:09.920 "bdev_lvol_set_read_only", 00:05:09.920 "bdev_lvol_resize", 00:05:09.920 "bdev_lvol_decouple_parent", 00:05:09.920 "bdev_lvol_inflate", 00:05:09.920 "bdev_lvol_rename", 00:05:09.920 "bdev_lvol_clone_bdev", 00:05:09.920 "bdev_lvol_clone", 00:05:09.920 "bdev_lvol_snapshot", 00:05:09.920 "bdev_lvol_create", 00:05:09.920 "bdev_lvol_delete_lvstore", 00:05:09.920 "bdev_lvol_rename_lvstore", 00:05:09.920 "bdev_lvol_create_lvstore", 00:05:09.920 "bdev_raid_set_options", 00:05:09.920 "bdev_raid_remove_base_bdev", 00:05:09.920 "bdev_raid_add_base_bdev", 00:05:09.920 "bdev_raid_delete", 00:05:09.920 "bdev_raid_create", 00:05:09.920 "bdev_raid_get_bdevs", 00:05:09.920 "bdev_error_inject_error", 00:05:09.920 "bdev_error_delete", 00:05:09.920 "bdev_error_create", 00:05:09.920 "bdev_split_delete", 00:05:09.920 
"bdev_split_create", 00:05:09.920 "bdev_delay_delete", 00:05:09.920 "bdev_delay_create", 00:05:09.920 "bdev_delay_update_latency", 00:05:09.920 "bdev_zone_block_delete", 00:05:09.920 "bdev_zone_block_create", 00:05:09.920 "blobfs_create", 00:05:09.920 "blobfs_detect", 00:05:09.920 "blobfs_set_cache_size", 00:05:09.920 "bdev_xnvme_delete", 00:05:09.920 "bdev_xnvme_create", 00:05:09.920 "bdev_aio_delete", 00:05:09.920 "bdev_aio_rescan", 00:05:09.920 "bdev_aio_create", 00:05:09.920 "bdev_ftl_set_property", 00:05:09.920 "bdev_ftl_get_properties", 00:05:09.920 "bdev_ftl_get_stats", 00:05:09.920 "bdev_ftl_unmap", 00:05:09.920 "bdev_ftl_unload", 00:05:09.920 "bdev_ftl_delete", 00:05:09.920 "bdev_ftl_load", 00:05:09.920 "bdev_ftl_create", 00:05:09.920 "bdev_virtio_attach_controller", 00:05:09.920 "bdev_virtio_scsi_get_devices", 00:05:09.920 "bdev_virtio_detach_controller", 00:05:09.920 "bdev_virtio_blk_set_hotplug", 00:05:09.920 "bdev_iscsi_delete", 00:05:09.920 "bdev_iscsi_create", 00:05:09.920 "bdev_iscsi_set_options", 00:05:09.920 "accel_error_inject_error", 00:05:09.920 "ioat_scan_accel_module", 00:05:09.920 "dsa_scan_accel_module", 00:05:09.920 "iaa_scan_accel_module", 00:05:09.920 "keyring_file_remove_key", 00:05:09.920 "keyring_file_add_key", 00:05:09.920 "keyring_linux_set_options", 00:05:09.920 "fsdev_aio_delete", 00:05:09.920 "fsdev_aio_create", 00:05:09.920 "iscsi_get_histogram", 00:05:09.920 "iscsi_enable_histogram", 00:05:09.920 "iscsi_set_options", 00:05:09.920 "iscsi_get_auth_groups", 00:05:09.920 "iscsi_auth_group_remove_secret", 00:05:09.920 "iscsi_auth_group_add_secret", 00:05:09.920 "iscsi_delete_auth_group", 00:05:09.920 "iscsi_create_auth_group", 00:05:09.920 "iscsi_set_discovery_auth", 00:05:09.920 "iscsi_get_options", 00:05:09.920 "iscsi_target_node_request_logout", 00:05:09.920 "iscsi_target_node_set_redirect", 00:05:09.920 "iscsi_target_node_set_auth", 00:05:09.920 "iscsi_target_node_add_lun", 00:05:09.920 "iscsi_get_stats", 00:05:09.920 
"iscsi_get_connections", 00:05:09.920 "iscsi_portal_group_set_auth", 00:05:09.920 "iscsi_start_portal_group", 00:05:09.921 "iscsi_delete_portal_group", 00:05:09.921 "iscsi_create_portal_group", 00:05:09.921 "iscsi_get_portal_groups", 00:05:09.921 "iscsi_delete_target_node", 00:05:09.921 "iscsi_target_node_remove_pg_ig_maps", 00:05:09.921 "iscsi_target_node_add_pg_ig_maps", 00:05:09.921 "iscsi_create_target_node", 00:05:09.921 "iscsi_get_target_nodes", 00:05:09.921 "iscsi_delete_initiator_group", 00:05:09.921 "iscsi_initiator_group_remove_initiators", 00:05:09.921 "iscsi_initiator_group_add_initiators", 00:05:09.921 "iscsi_create_initiator_group", 00:05:09.921 "iscsi_get_initiator_groups", 00:05:09.921 "nvmf_set_crdt", 00:05:09.921 "nvmf_set_config", 00:05:09.921 "nvmf_set_max_subsystems", 00:05:09.921 "nvmf_stop_mdns_prr", 00:05:09.921 "nvmf_publish_mdns_prr", 00:05:09.921 "nvmf_subsystem_get_listeners", 00:05:09.921 "nvmf_subsystem_get_qpairs", 00:05:09.921 "nvmf_subsystem_get_controllers", 00:05:09.921 "nvmf_get_stats", 00:05:09.921 "nvmf_get_transports", 00:05:09.921 "nvmf_create_transport", 00:05:09.921 "nvmf_get_targets", 00:05:09.921 "nvmf_delete_target", 00:05:09.921 "nvmf_create_target", 00:05:09.921 "nvmf_subsystem_allow_any_host", 00:05:09.921 "nvmf_subsystem_set_keys", 00:05:09.921 "nvmf_subsystem_remove_host", 00:05:09.921 "nvmf_subsystem_add_host", 00:05:09.921 "nvmf_ns_remove_host", 00:05:09.921 "nvmf_ns_add_host", 00:05:09.921 "nvmf_subsystem_remove_ns", 00:05:09.921 "nvmf_subsystem_set_ns_ana_group", 00:05:09.921 "nvmf_subsystem_add_ns", 00:05:09.921 "nvmf_subsystem_listener_set_ana_state", 00:05:09.921 "nvmf_discovery_get_referrals", 00:05:09.921 "nvmf_discovery_remove_referral", 00:05:09.921 "nvmf_discovery_add_referral", 00:05:09.921 "nvmf_subsystem_remove_listener", 00:05:09.921 "nvmf_subsystem_add_listener", 00:05:09.921 "nvmf_delete_subsystem", 00:05:09.921 "nvmf_create_subsystem", 00:05:09.921 "nvmf_get_subsystems", 00:05:09.921 
"env_dpdk_get_mem_stats", 00:05:09.921 "nbd_get_disks", 00:05:09.921 "nbd_stop_disk", 00:05:09.921 "nbd_start_disk", 00:05:09.921 "ublk_recover_disk", 00:05:09.921 "ublk_get_disks", 00:05:09.921 "ublk_stop_disk", 00:05:09.921 "ublk_start_disk", 00:05:09.921 "ublk_destroy_target", 00:05:09.921 "ublk_create_target", 00:05:09.921 "virtio_blk_create_transport", 00:05:09.921 "virtio_blk_get_transports", 00:05:09.921 "vhost_controller_set_coalescing", 00:05:09.921 "vhost_get_controllers", 00:05:09.921 "vhost_delete_controller", 00:05:09.921 "vhost_create_blk_controller", 00:05:09.921 "vhost_scsi_controller_remove_target", 00:05:09.921 "vhost_scsi_controller_add_target", 00:05:09.921 "vhost_start_scsi_controller", 00:05:09.921 "vhost_create_scsi_controller", 00:05:09.921 "thread_set_cpumask", 00:05:09.921 "scheduler_set_options", 00:05:09.921 "framework_get_governor", 00:05:09.921 "framework_get_scheduler", 00:05:09.921 "framework_set_scheduler", 00:05:09.921 "framework_get_reactors", 00:05:09.921 "thread_get_io_channels", 00:05:09.921 "thread_get_pollers", 00:05:09.921 "thread_get_stats", 00:05:09.921 "framework_monitor_context_switch", 00:05:09.921 "spdk_kill_instance", 00:05:09.921 "log_enable_timestamps", 00:05:09.921 "log_get_flags", 00:05:09.921 "log_clear_flag", 00:05:09.921 "log_set_flag", 00:05:09.921 "log_get_level", 00:05:09.921 "log_set_level", 00:05:09.921 "log_get_print_level", 00:05:09.921 "log_set_print_level", 00:05:09.921 "framework_enable_cpumask_locks", 00:05:09.921 "framework_disable_cpumask_locks", 00:05:09.921 "framework_wait_init", 00:05:09.921 "framework_start_init", 00:05:09.921 "scsi_get_devices", 00:05:09.921 "bdev_get_histogram", 00:05:09.921 "bdev_enable_histogram", 00:05:09.921 "bdev_set_qos_limit", 00:05:09.921 "bdev_set_qd_sampling_period", 00:05:09.921 "bdev_get_bdevs", 00:05:09.921 "bdev_reset_iostat", 00:05:09.921 "bdev_get_iostat", 00:05:09.921 "bdev_examine", 00:05:09.921 "bdev_wait_for_examine", 00:05:09.921 "bdev_set_options", 
00:05:09.921 "accel_get_stats", 00:05:09.921 "accel_set_options", 00:05:09.921 "accel_set_driver", 00:05:09.921 "accel_crypto_key_destroy", 00:05:09.921 "accel_crypto_keys_get", 00:05:09.921 "accel_crypto_key_create", 00:05:09.921 "accel_assign_opc", 00:05:09.921 "accel_get_module_info", 00:05:09.921 "accel_get_opc_assignments", 00:05:09.921 "vmd_rescan", 00:05:09.921 "vmd_remove_device", 00:05:09.921 "vmd_enable", 00:05:09.921 "sock_get_default_impl", 00:05:09.921 "sock_set_default_impl", 00:05:09.921 "sock_impl_set_options", 00:05:09.921 "sock_impl_get_options", 00:05:09.921 "iobuf_get_stats", 00:05:09.921 "iobuf_set_options", 00:05:09.921 "keyring_get_keys", 00:05:09.921 "framework_get_pci_devices", 00:05:09.921 "framework_get_config", 00:05:09.921 "framework_get_subsystems", 00:05:09.921 "fsdev_set_opts", 00:05:09.921 "fsdev_get_opts", 00:05:09.921 "trace_get_info", 00:05:09.921 "trace_get_tpoint_group_mask", 00:05:09.921 "trace_disable_tpoint_group", 00:05:09.921 "trace_enable_tpoint_group", 00:05:09.921 "trace_clear_tpoint_mask", 00:05:09.921 "trace_set_tpoint_mask", 00:05:09.921 "notify_get_notifications", 00:05:09.921 "notify_get_types", 00:05:09.921 "spdk_get_version", 00:05:09.921 "rpc_get_methods" 00:05:09.921 ] 00:05:09.921 19:48:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.921 19:48:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:09.921 19:48:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58335 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58335 ']' 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58335 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58335 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58335' 00:05:09.921 killing process with pid 58335 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58335 00:05:09.921 19:48:54 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58335 00:05:11.296 00:05:11.296 real 0m2.876s 00:05:11.296 user 0m4.994s 00:05:11.296 sys 0m0.459s 00:05:11.296 19:48:55 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.296 19:48:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.296 ************************************ 00:05:11.296 END TEST spdkcli_tcp 00:05:11.296 ************************************ 00:05:11.555 19:48:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.555 19:48:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.555 19:48:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.555 19:48:55 -- common/autotest_common.sh@10 -- # set +x 00:05:11.555 ************************************ 00:05:11.555 START TEST dpdk_mem_utility 00:05:11.555 ************************************ 00:05:11.555 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.555 * Looking for test storage... 
00:05:11.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:11.555 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.555 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.555 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.555 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.555 19:48:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.555 19:48:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.555 19:48:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:11.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.556 19:48:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.556 --rc genhtml_branch_coverage=1 00:05:11.556 --rc genhtml_function_coverage=1 00:05:11.556 --rc genhtml_legend=1 00:05:11.556 --rc geninfo_all_blocks=1 00:05:11.556 --rc geninfo_unexecuted_blocks=1 00:05:11.556 00:05:11.556 ' 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.556 --rc genhtml_branch_coverage=1 00:05:11.556 --rc genhtml_function_coverage=1 
00:05:11.556 --rc genhtml_legend=1 00:05:11.556 --rc geninfo_all_blocks=1 00:05:11.556 --rc geninfo_unexecuted_blocks=1 00:05:11.556 00:05:11.556 ' 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.556 --rc genhtml_branch_coverage=1 00:05:11.556 --rc genhtml_function_coverage=1 00:05:11.556 --rc genhtml_legend=1 00:05:11.556 --rc geninfo_all_blocks=1 00:05:11.556 --rc geninfo_unexecuted_blocks=1 00:05:11.556 00:05:11.556 ' 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.556 --rc genhtml_branch_coverage=1 00:05:11.556 --rc genhtml_function_coverage=1 00:05:11.556 --rc genhtml_legend=1 00:05:11.556 --rc geninfo_all_blocks=1 00:05:11.556 --rc geninfo_unexecuted_blocks=1 00:05:11.556 00:05:11.556 ' 00:05:11.556 19:48:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:11.556 19:48:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58441 00:05:11.556 19:48:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58441 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58441 ']' 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.556 19:48:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.556 19:48:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.556 [2024-09-30 19:48:55.889752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:11.556 [2024-09-30 19:48:55.889874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58441 ] 00:05:11.815 [2024-09-30 19:48:56.040989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.072 [2024-09-30 19:48:56.222296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.639 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.639 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:12.639 19:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.639 19:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.639 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.639 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.639 { 00:05:12.639 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.639 } 00:05:12.639 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.640 19:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.640 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:12.640 1 heaps totaling size 866.000000 MiB 
00:05:12.640 size: 866.000000 MiB heap id: 0 00:05:12.640 end heaps---------- 00:05:12.640 9 mempools totaling size 642.649841 MiB 00:05:12.640 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.640 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.640 size: 92.545471 MiB name: bdev_io_58441 00:05:12.640 size: 51.011292 MiB name: evtpool_58441 00:05:12.640 size: 50.003479 MiB name: msgpool_58441 00:05:12.640 size: 36.509338 MiB name: fsdev_io_58441 00:05:12.640 size: 21.763794 MiB name: PDU_Pool 00:05:12.640 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.640 size: 0.026123 MiB name: Session_Pool 00:05:12.640 end mempools------- 00:05:12.640 6 memzones totaling size 4.142822 MiB 00:05:12.640 size: 1.000366 MiB name: RG_ring_0_58441 00:05:12.640 size: 1.000366 MiB name: RG_ring_1_58441 00:05:12.640 size: 1.000366 MiB name: RG_ring_4_58441 00:05:12.640 size: 1.000366 MiB name: RG_ring_5_58441 00:05:12.640 size: 0.125366 MiB name: RG_ring_2_58441 00:05:12.640 size: 0.015991 MiB name: RG_ring_3_58441 00:05:12.640 end memzones------- 00:05:12.640 19:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:12.640 heap id: 0 total size: 866.000000 MiB number of busy elements: 313 number of free elements: 19 00:05:12.640 list of free elements. 
size: 19.914062 MiB 00:05:12.640 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:12.640 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:12.640 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:12.640 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:12.640 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:12.640 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:12.640 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:12.640 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:12.640 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:12.640 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:12.640 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:12.640 element at address: 0x200000200000 with size: 0.831909 MiB 00:05:12.640 element at address: 0x20001de00000 with size: 0.560974 MiB 00:05:12.640 element at address: 0x200003e00000 with size: 0.490417 MiB 00:05:12.640 element at address: 0x20001c000000 with size: 0.489197 MiB 00:05:12.640 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:12.640 element at address: 0x200015e00000 with size: 0.443237 MiB 00:05:12.640 element at address: 0x20002b200000 with size: 0.391418 MiB 00:05:12.640 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:12.640 list of standard malloc elements. 
size: 199.287231 MiB 00:05:12.640 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:05:12.640 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:05:12.640 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:05:12.640 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:05:12.640 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:05:12.640 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:12.640 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:05:12.640 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:12.640 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:05:12.640 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:05:12.640 element at address: 0x200015dff040 with size: 0.000305 MiB 00:05:12.640 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:12.640 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:12.640 element at 
address: 0x2000002d5f80 with size: 0.000244 MiB
00:05:12.640 elements at addresses 0x2000002d6200-0x2000002d7b00 (0x100 apart, 26 total), each with size: 0.000244 MiB
00:05:12.640 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:12.640 elements at addresses 0x200003a7e9c0-0x200003a7f3c0 (0x100 apart, 11 total), each with size: 0.000244 MiB
00:05:12.640 elements at addresses 0x200003aff700, 0x200003aff980, 0x200003affa80, each with size: 0.000244 MiB
00:05:12.640 elements at addresses 0x200003e7d8c0-0x200003e7ecc0 (0x100 apart, 21 total), each with size: 0.000244 MiB
00:05:12.640 element at address: 0x200003eff000 with size: 0.000244 MiB
00:05:12.640 elements at addresses 0x20000d7ff200-0x20000d7fff00 (0x100 apart, 14 total), each with size: 0.000244 MiB
00:05:12.640 elements at addresses 0x200015dff180-0x200015dffc80 (0x100 apart, 12 total) and 0x200015dfff00, each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x200015e71780-0x200015e72180 (0x100 apart, 11 total), each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x200015ef24c0 and 0x20001bcfdd00, each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x20001c07d3c0-0x20001c07d9c0 (0x100 apart, 7 total), each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x20001c0fdd00, 0x20001c4ffc40, 0x20001c7efbc0, 0x20001c7efcc0, 0x20001c8bc680, each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x20001de8f9c0-0x20001de953c0 (0x100 apart, 91 total), each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x20002b264340, 0x20002b264440, 0x20002b26b100, each with size: 0.000244 MiB
00:05:12.641 elements at addresses 0x20002b26b380-0x20002b26fe80 (0x100 apart, 76 total), each with size: 0.000244 MiB
00:05:12.642 list of memzone associated elements. size: 646.798706 MiB
00:05:12.642 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:05:12.642 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:12.642 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:05:12.642 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:12.642 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:05:12.642 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58441_0
00:05:12.642 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:12.642 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58441_0
00:05:12.642 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:12.642 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58441_0
00:05:12.642 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:05:12.642 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58441_0
00:05:12.642 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:05:12.642 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:12.642 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:05:12.642 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:12.642 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:05:12.642 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58441
00:05:12.642 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:12.642 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58441
00:05:12.642 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:12.642 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58441
00:05:12.642 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:05:12.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:12.642 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:05:12.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:12.642 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:05:12.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:12.642 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:05:12.642 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:12.642 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:12.642 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58441
00:05:12.642 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:12.642 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58441
00:05:12.642 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:05:12.642 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58441
00:05:12.642 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:05:12.642 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58441
00:05:12.642 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:05:12.642 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58441
00:05:12.642 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:05:12.642 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58441
00:05:12.642 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:05:12.642 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:12.642 element at address: 0x200015e72280 with size: 0.500549 MiB
00:05:12.642 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:12.642 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:05:12.642 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:12.642 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:05:12.642 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58441
00:05:12.642 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:05:12.642 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:12.642 element at address: 0x20002b264540 with size: 0.023804 MiB
00:05:12.642 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:12.642 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:05:12.642 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58441
00:05:12.642 element at address: 0x20002b26a6c0 with size: 0.002502 MiB
00:05:12.642 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:12.642 element at address: 0x2000002d6080 with size: 0.000366 MiB
00:05:12.642 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58441
00:05:12.642 element at address: 0x200003aff800 with size: 0.000366 MiB
00:05:12.642 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58441
00:05:12.642 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:05:12.642 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58441
00:05:12.642 element at address: 0x20002b26b200 with size: 0.000366 MiB
00:05:12.642 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:12.642 19:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:12.642 19:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26
-- # killprocess 58441
00:05:12.642 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58441 ']'
00:05:12.642 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58441
00:05:12.642 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:12.642 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:12.642 19:48:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58441
killing process with pid 58441
19:48:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:48:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:48:56 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58441'
19:48:56 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58441
19:48:56 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58441
00:05:14.545
00:05:14.545 real 0m2.872s
00:05:14.545 user 0m2.882s
00:05:14.545 sys 0m0.400s
19:48:58 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
19:48:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:14.545 ************************************
00:05:14.545 END TEST dpdk_mem_utility
00:05:14.545 ************************************
19:48:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
19:48:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
19:48:58 -- common/autotest_common.sh@1107 -- # xtrace_disable
19:48:58 -- common/autotest_common.sh@10 -- # set +x
00:05:14.545 ************************************
00:05:14.545 START TEST event
00:05:14.545 ************************************
19:48:58 event -- common/autotest_common.sh@1125 -- #
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.545 * Looking for test storage... 00:05:14.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:14.545 19:48:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.545 19:48:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.545 19:48:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.545 19:48:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.545 19:48:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.545 19:48:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.545 19:48:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.545 19:48:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.545 19:48:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.545 19:48:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.545 19:48:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.545 19:48:58 event -- scripts/common.sh@344 -- # case "$op" in 00:05:14.545 19:48:58 event -- scripts/common.sh@345 -- # : 1 00:05:14.545 19:48:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.545 19:48:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.545 19:48:58 event -- scripts/common.sh@365 -- # decimal 1 00:05:14.545 19:48:58 event -- scripts/common.sh@353 -- # local d=1 00:05:14.545 19:48:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.545 19:48:58 event -- scripts/common.sh@355 -- # echo 1 00:05:14.545 19:48:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.545 19:48:58 event -- scripts/common.sh@366 -- # decimal 2 00:05:14.545 19:48:58 event -- scripts/common.sh@353 -- # local d=2 00:05:14.545 19:48:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.545 19:48:58 event -- scripts/common.sh@355 -- # echo 2 00:05:14.545 19:48:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.545 19:48:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.545 19:48:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.545 19:48:58 event -- scripts/common.sh@368 -- # return 0 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:14.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.545 --rc genhtml_branch_coverage=1 00:05:14.545 --rc genhtml_function_coverage=1 00:05:14.545 --rc genhtml_legend=1 00:05:14.545 --rc geninfo_all_blocks=1 00:05:14.545 --rc geninfo_unexecuted_blocks=1 00:05:14.545 00:05:14.545 ' 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:14.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.545 --rc genhtml_branch_coverage=1 00:05:14.545 --rc genhtml_function_coverage=1 00:05:14.545 --rc genhtml_legend=1 00:05:14.545 --rc geninfo_all_blocks=1 00:05:14.545 --rc geninfo_unexecuted_blocks=1 00:05:14.545 00:05:14.545 ' 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:14.545 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:14.545 --rc genhtml_branch_coverage=1 00:05:14.545 --rc genhtml_function_coverage=1 00:05:14.545 --rc genhtml_legend=1 00:05:14.545 --rc geninfo_all_blocks=1 00:05:14.545 --rc geninfo_unexecuted_blocks=1 00:05:14.545 00:05:14.545 ' 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:14.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.545 --rc genhtml_branch_coverage=1 00:05:14.545 --rc genhtml_function_coverage=1 00:05:14.545 --rc genhtml_legend=1 00:05:14.545 --rc geninfo_all_blocks=1 00:05:14.545 --rc geninfo_unexecuted_blocks=1 00:05:14.545 00:05:14.545 ' 00:05:14.545 19:48:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:14.545 19:48:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.545 19:48:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:14.545 19:48:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.545 19:48:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.545 ************************************ 00:05:14.545 START TEST event_perf 00:05:14.545 ************************************ 00:05:14.545 19:48:58 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.545 Running I/O for 1 seconds...[2024-09-30 19:48:58.801106] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:14.545 [2024-09-30 19:48:58.801327] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58538 ] 00:05:14.803 [2024-09-30 19:48:58.953222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.803 [2024-09-30 19:48:59.134380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.803 [2024-09-30 19:48:59.134624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.803 [2024-09-30 19:48:59.135313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.803 [2024-09-30 19:48:59.135341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.219 Running I/O for 1 seconds... 00:05:16.219 lcore 0: 188329 00:05:16.219 lcore 1: 188330 00:05:16.219 lcore 2: 188329 00:05:16.219 lcore 3: 188328 00:05:16.219 done. 
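The dpdk_mem_utility teardown traced earlier shuts its target down through a killprocess helper: a kill -0 liveness probe, a ps -o comm= name lookup so a bare sudo wrapper is never signalled, then kill and wait. A minimal sketch reconstructed from that xtrace — the real helper in common/autotest_common.sh does more, so treat this as an approximation:

```shell
# Sketch of the killprocess helper as reconstructed from the xtrace in this
# log (common/autotest_common.sh); the real implementation differs in detail.
# Assumes a Linux procps 'ps' that supports --no-headers and -o comm=.
killprocess() {
    local pid=$1 process_name
    if [ -z "$pid" ]; then return 1; fi                  # no pid supplied
    if ! kill -0 "$pid" 2>/dev/null; then return 1; fi   # already gone
    # Refuse to signal a bare sudo wrapper
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = sudo ]; then return 1; fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                      # reap it if it was our child
}
```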
00:05:16.219 00:05:16.219 real 0m1.639s 00:05:16.219 user 0m4.421s 00:05:16.219 sys 0m0.099s 00:05:16.219 19:49:00 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.219 ************************************ 00:05:16.219 END TEST event_perf 00:05:16.219 ************************************ 00:05:16.219 19:49:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.219 19:49:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.219 19:49:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:16.219 19:49:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.219 19:49:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.219 ************************************ 00:05:16.219 START TEST event_reactor 00:05:16.219 ************************************ 00:05:16.219 19:49:00 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.219 [2024-09-30 19:49:00.498090] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
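Every test in this log runs through a run_test wrapper that prints the START/END banners and the time summary seen above (note that for event_perf the user time, 0m4.421s, exceeds the real time, 0m1.639s, because four reactors poll in parallel under -m 0xF). A hypothetical minimal analogue of that wrapper — the actual helper in common/autotest_common.sh also manages xtrace state and exit-code bookkeeping:

```shell
# Hypothetical minimal analogue of the run_test wrapper whose banners appear
# throughout this log; names and structure are assumptions, not the real code.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                  # user time may exceed real time on multicore tests
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```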
00:05:16.219 [2024-09-30 19:49:00.498199] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58578 ] 00:05:16.476 [2024-09-30 19:49:00.648394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.476 [2024-09-30 19:49:00.825552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.845 test_start 00:05:17.845 oneshot 00:05:17.845 tick 100 00:05:17.845 tick 100 00:05:17.845 tick 250 00:05:17.845 tick 100 00:05:17.845 tick 100 00:05:17.845 tick 100 00:05:17.845 tick 250 00:05:17.845 tick 500 00:05:17.845 tick 100 00:05:17.845 tick 100 00:05:17.845 tick 250 00:05:17.845 tick 100 00:05:17.845 tick 100 00:05:17.845 test_end 00:05:17.845 ************************************ 00:05:17.845 END TEST event_reactor 00:05:17.845 ************************************ 00:05:17.845 00:05:17.845 real 0m1.621s 00:05:17.845 user 0m1.445s 00:05:17.845 sys 0m0.066s 00:05:17.845 19:49:02 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.845 19:49:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:17.845 19:49:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.845 19:49:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:17.845 19:49:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.845 19:49:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.845 ************************************ 00:05:17.845 START TEST event_reactor_perf 00:05:17.845 ************************************ 00:05:17.845 19:49:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.845 [2024-09-30 
19:49:02.179380] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:17.845 [2024-09-30 19:49:02.179469] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58620 ] 00:05:18.102 [2024-09-30 19:49:02.329332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.360 [2024-09-30 19:49:02.505150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.735 test_start 00:05:19.735 test_end 00:05:19.735 Performance: 333595 events per second 00:05:19.735 00:05:19.735 real 0m1.564s 00:05:19.735 user 0m1.374s 00:05:19.735 sys 0m0.081s 00:05:19.735 19:49:03 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.735 ************************************ 00:05:19.735 19:49:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.735 END TEST event_reactor_perf 00:05:19.735 ************************************ 00:05:19.735 19:49:03 event -- event/event.sh@49 -- # uname -s 00:05:19.735 19:49:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:19.735 19:49:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.735 19:49:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.735 19:49:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.735 19:49:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.735 ************************************ 00:05:19.735 START TEST event_scheduler 00:05:19.735 ************************************ 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.735 * Looking for test storage... 
00:05:19.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.735 19:49:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.735 --rc genhtml_branch_coverage=1 00:05:19.735 --rc genhtml_function_coverage=1 00:05:19.735 --rc genhtml_legend=1 00:05:19.735 --rc geninfo_all_blocks=1 00:05:19.735 --rc geninfo_unexecuted_blocks=1 00:05:19.735 00:05:19.735 ' 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.735 --rc genhtml_branch_coverage=1 00:05:19.735 --rc genhtml_function_coverage=1 00:05:19.735 --rc 
genhtml_legend=1 00:05:19.735 --rc geninfo_all_blocks=1 00:05:19.735 --rc geninfo_unexecuted_blocks=1 00:05:19.735 00:05:19.735 ' 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.735 --rc genhtml_branch_coverage=1 00:05:19.735 --rc genhtml_function_coverage=1 00:05:19.735 --rc genhtml_legend=1 00:05:19.735 --rc geninfo_all_blocks=1 00:05:19.735 --rc geninfo_unexecuted_blocks=1 00:05:19.735 00:05:19.735 ' 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.735 --rc genhtml_branch_coverage=1 00:05:19.735 --rc genhtml_function_coverage=1 00:05:19.735 --rc genhtml_legend=1 00:05:19.735 --rc geninfo_all_blocks=1 00:05:19.735 --rc geninfo_unexecuted_blocks=1 00:05:19.735 00:05:19.735 ' 00:05:19.735 19:49:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.735 19:49:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58690 00:05:19.735 19:49:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.735 19:49:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58690 00:05:19.735 19:49:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58690 ']' 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:19.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.735 19:49:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.735 [2024-09-30 19:49:03.981281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:19.735 [2024-09-30 19:49:03.981566] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58690 ] 00:05:19.993 [2024-09-30 19:49:04.135680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.993 [2024-09-30 19:49:04.318900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.993 [2024-09-30 19:49:04.319192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.993 [2024-09-30 19:49:04.319614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.993 [2024-09-30 19:49:04.319648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:20.560 19:49:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.560 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.560 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.560 POWER: Cannot set governor of lcore 0 to performance 00:05:20.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.560 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.560 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.560 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:20.560 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:20.560 POWER: Unable to set Power Management Environment for lcore 0 00:05:20.560 [2024-09-30 19:49:04.828949] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:20.560 [2024-09-30 19:49:04.828969] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:20.560 [2024-09-30 19:49:04.828978] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.560 [2024-09-30 19:49:04.828993] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.560 [2024-09-30 19:49:04.829001] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.560 [2024-09-30 19:49:04.829011] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.560 19:49:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.560 19:49:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 [2024-09-30 19:49:05.051410] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:20.818 19:49:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.818 19:49:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.818 19:49:05 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.818 19:49:05 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.818 19:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 ************************************ 00:05:20.818 START TEST scheduler_create_thread 00:05:20.818 ************************************ 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 2 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 3 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 4 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 5 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.818 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.818 6 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.819 7 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 8 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 9 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 10 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.819 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.385 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.385 19:49:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:21.385 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.385 19:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.826 19:49:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.826 19:49:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.826 19:49:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.826 19:49:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.826 19:49:07 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.202 ************************************ 00:05:24.202 END TEST scheduler_create_thread 00:05:24.202 ************************************ 00:05:24.202 19:49:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.202 00:05:24.202 real 0m3.097s 00:05:24.202 user 0m0.014s 00:05:24.202 sys 0m0.007s 00:05:24.202 19:49:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.202 19:49:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.202 19:49:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.202 19:49:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58690 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58690 ']' 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58690 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58690 00:05:24.202 killing process with pid 58690 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58690' 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58690 00:05:24.202 19:49:08 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58690 00:05:24.202 [2024-09-30 19:49:08.541878] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:25.136 00:05:25.136 real 0m5.429s 00:05:25.136 user 0m10.166s 00:05:25.136 sys 0m0.330s 00:05:25.136 19:49:09 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.136 19:49:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.136 ************************************ 00:05:25.136 END TEST event_scheduler 00:05:25.136 ************************************ 00:05:25.137 19:49:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.137 19:49:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.137 19:49:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.137 19:49:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.137 19:49:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.137 ************************************ 00:05:25.137 START TEST app_repeat 00:05:25.137 ************************************ 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58802 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.137 Process app_repeat pid: 58802 00:05:25.137 
19:49:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58802' 00:05:25.137 spdk_app_start Round 0 00:05:25.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.137 19:49:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58802 /var/tmp/spdk-nbd.sock 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58802 ']' 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.137 19:49:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.137 [2024-09-30 19:49:09.302825] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:25.137 [2024-09-30 19:49:09.302909] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58802 ] 00:05:25.137 [2024-09-30 19:49:09.447521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.394 [2024-09-30 19:49:09.624557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.394 [2024-09-30 19:49:09.624636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.961 19:49:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.961 19:49:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.961 19:49:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.219 Malloc0 00:05:26.219 19:49:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.477 Malloc1 00:05:26.477 19:49:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.477 19:49:10 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.477 19:49:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.735 /dev/nbd0 00:05:26.735 19:49:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.735 19:49:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.735 1+0 records in 00:05:26.735 1+0 
records out
00:05:26.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253316 s, 16.2 MB/s
00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:26.735 19:49:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:26.735 19:49:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:26.735 19:49:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:26.735 19:49:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:26.735 /dev/nbd1
00:05:26.735 19:49:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:26.735 19:49:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:26.735 19:49:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:26.735 19:49:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:26.735 19:49:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:26.735 19:49:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:26.735 19:49:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:26.736 19:49:11 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:26.736 19:49:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:26.736 19:49:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:26.736 19:49:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:26.736 1+0 records in
00:05:26.736 1+0 records out
00:05:26.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277636 s, 14.8 MB/s
00:05:26.994 19:49:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:26.994 19:49:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:26.994 19:49:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:26.994 19:49:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:26.994 19:49:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:26.994 {
00:05:26.994 "nbd_device": "/dev/nbd0",
00:05:26.994 "bdev_name": "Malloc0"
00:05:26.994 },
00:05:26.994 {
00:05:26.994 "nbd_device": "/dev/nbd1",
00:05:26.994 "bdev_name": "Malloc1"
00:05:26.994 }
00:05:26.994 ]'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:26.994 {
00:05:26.994 "nbd_device": "/dev/nbd0",
00:05:26.994 "bdev_name": "Malloc0"
00:05:26.994 },
00:05:26.994 {
00:05:26.994 "nbd_device": "/dev/nbd1",
00:05:26.994 "bdev_name": "Malloc1"
00:05:26.994 }
00:05:26.994 ]'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:26.994 /dev/nbd1'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:26.994 /dev/nbd1'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:26.994 256+0 records in
00:05:26.994 256+0 records out
00:05:26.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555121 s, 189 MB/s
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:26.994 19:49:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:27.253 256+0 records in
00:05:27.253 256+0 records out
00:05:27.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212816 s, 49.3 MB/s
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:27.253 256+0 records in
00:05:27.253 256+0 records out
00:05:27.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193556 s, 54.2 MB/s
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:27.253 19:49:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:27.512 19:49:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:27.771 19:49:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:27.771 19:49:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:28.029 19:49:12 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:28.964 [2024-09-30 19:49:12.998914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:28.964 [2024-09-30 19:49:13.131510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:28.964 [2024-09-30 19:49:13.131614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.964 [2024-09-30 19:49:13.234571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:28.964 [2024-09-30 19:49:13.234612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:31.496 spdk_app_start Round 1
00:05:31.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:31.496 19:49:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:31.496 19:49:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:31.496 19:49:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58802 /var/tmp/spdk-nbd.sock
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58802 ']'
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.496 19:49:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:31.496 19:49:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:31.496 Malloc0
00:05:31.496 19:49:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:31.755 Malloc1
00:05:31.755 19:49:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:31.755 19:49:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:32.013 /dev/nbd0
00:05:32.013 19:49:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:32.013 19:49:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:32.013 19:49:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:32.013 19:49:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:32.013 19:49:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:32.013 19:49:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:32.013 19:49:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:32.013 19:49:16 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:32.014 1+0 records in
00:05:32.014 1+0 records out
00:05:32.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235849 s, 17.4 MB/s
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:32.014 19:49:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:32.014 19:49:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:32.014 19:49:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:32.014 19:49:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:32.272 /dev/nbd1
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:32.272 1+0 records in
00:05:32.272 1+0 records out
00:05:32.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223217 s, 18.3 MB/s
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:32.272 19:49:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:32.272 19:49:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:32.530 {
00:05:32.530 "nbd_device": "/dev/nbd0",
00:05:32.530 "bdev_name": "Malloc0"
00:05:32.530 },
00:05:32.530 {
00:05:32.530 "nbd_device": "/dev/nbd1",
00:05:32.530 "bdev_name": "Malloc1"
00:05:32.530 }
00:05:32.530 ]'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:32.530 {
00:05:32.530 "nbd_device": "/dev/nbd0",
00:05:32.530 "bdev_name": "Malloc0"
00:05:32.530 },
00:05:32.530 {
00:05:32.530 "nbd_device": "/dev/nbd1",
00:05:32.530 "bdev_name": "Malloc1"
00:05:32.530 }
00:05:32.530 ]'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:32.530 /dev/nbd1'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:32.530 /dev/nbd1'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:32.530 256+0 records in
00:05:32.530 256+0 records out
00:05:32.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671721 s, 156 MB/s
00:05:32.530 19:49:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:32.531 256+0 records in
00:05:32.531 256+0 records out
00:05:32.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148431 s, 70.6 MB/s
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:32.531 256+0 records in
00:05:32.531 256+0 records out
00:05:32.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013614 s, 77.0 MB/s
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:32.531 19:49:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:32.789 19:49:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:33.047 19:49:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:33.304 19:49:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:33.304 19:49:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:33.562 19:49:17 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:34.127 [2024-09-30 19:49:18.436222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:34.385 [2024-09-30 19:49:18.566450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:34.385 [2024-09-30 19:49:18.566542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.385 [2024-09-30 19:49:18.668948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:34.385 [2024-09-30 19:49:18.669005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:36.915 spdk_app_start Round 2
00:05:36.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:36.915 19:49:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:36.915 19:49:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:36.915 19:49:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58802 /var/tmp/spdk-nbd.sock
00:05:36.915 19:49:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58802 ']'
00:05:36.915 19:49:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:36.915 19:49:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:36.915 19:49:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:36.915 19:49:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:36.915 19:49:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:36.915 19:49:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:36.915 19:49:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:36.915 19:49:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:36.915 Malloc0
00:05:36.915 19:49:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:37.174 Malloc1
00:05:37.174 19:49:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:37.174 19:49:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:37.433 /dev/nbd0
00:05:37.433 19:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:37.433 19:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:37.433 1+0 records in
00:05:37.433 1+0 records out
00:05:37.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216651 s, 18.9 MB/s
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:37.433 19:49:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:37.433 19:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:37.433 19:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:37.433 19:49:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:37.692 /dev/nbd1
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:37.692 1+0 records in
00:05:37.692 1+0 records out
00:05:37.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161038 s, 25.4 MB/s
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:37.692 19:49:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:37.692 19:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:37.951 {
00:05:37.951 "nbd_device": "/dev/nbd0",
00:05:37.951 "bdev_name": "Malloc0"
00:05:37.951 },
00:05:37.951 {
00:05:37.951 "nbd_device": "/dev/nbd1",
00:05:37.951 "bdev_name": "Malloc1"
00:05:37.951 }
00:05:37.951 ]'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:37.951 {
00:05:37.951 "nbd_device": "/dev/nbd0",
00:05:37.951 "bdev_name": "Malloc0"
00:05:37.951 },
00:05:37.951 {
00:05:37.951 "nbd_device": "/dev/nbd1",
00:05:37.951 "bdev_name": "Malloc1"
00:05:37.951 }
00:05:37.951 ]'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:37.951 /dev/nbd1'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:37.951 /dev/nbd1'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:37.951 256+0 records in
00:05:37.951 256+0 records out
00:05:37.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732193 s, 143 MB/s
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:37.951 256+0 records in
00:05:37.951 256+0 records out
00:05:37.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170821 s, 61.4 MB/s
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:37.951 256+0 records in
00:05:37.951 256+0 records out
00:05:37.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183843 s, 57.0 MB/s
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.951 19:49:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.210 19:49:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.468 19:49:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.727 19:49:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.727 19:49:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.985 19:49:23 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:39.552 [2024-09-30 19:49:23.824681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.827 [2024-09-30 19:49:23.953158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.827 [2024-09-30 19:49:23.953167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.827 [2024-09-30 19:49:24.049215] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.827 [2024-09-30 19:49:24.049263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:42.354 19:49:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58802 /var/tmp/spdk-nbd.sock 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58802 ']' 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:42.354 19:49:26 event.app_repeat -- event/event.sh@39 -- # killprocess 58802 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58802 ']' 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58802 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58802 00:05:42.354 killing process with pid 58802 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58802' 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58802 00:05:42.354 19:49:26 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58802 00:05:42.919 spdk_app_start is called in Round 0. 00:05:42.919 Shutdown signal received, stop current app iteration 00:05:42.919 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:42.919 spdk_app_start is called in Round 1. 00:05:42.919 Shutdown signal received, stop current app iteration 00:05:42.919 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:42.919 spdk_app_start is called in Round 2. 
00:05:42.919 Shutdown signal received, stop current app iteration 00:05:42.919 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:42.919 spdk_app_start is called in Round 3. 00:05:42.919 Shutdown signal received, stop current app iteration 00:05:42.919 ************************************ 00:05:42.919 END TEST app_repeat 00:05:42.919 ************************************ 00:05:42.919 19:49:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.919 19:49:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.919 00:05:42.919 real 0m17.739s 00:05:42.919 user 0m38.330s 00:05:42.919 sys 0m2.085s 00:05:42.919 19:49:27 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.919 19:49:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.919 19:49:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.919 19:49:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.919 19:49:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.919 19:49:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.919 19:49:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.919 ************************************ 00:05:42.919 START TEST cpu_locks 00:05:42.919 ************************************ 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.919 * Looking for test storage... 
00:05:42.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.919 19:49:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.919 --rc genhtml_branch_coverage=1 00:05:42.919 --rc genhtml_function_coverage=1 00:05:42.919 --rc genhtml_legend=1 00:05:42.919 --rc geninfo_all_blocks=1 00:05:42.919 --rc geninfo_unexecuted_blocks=1 00:05:42.919 00:05:42.919 ' 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.919 --rc genhtml_branch_coverage=1 00:05:42.919 --rc genhtml_function_coverage=1 00:05:42.919 --rc genhtml_legend=1 00:05:42.919 --rc geninfo_all_blocks=1 00:05:42.919 --rc geninfo_unexecuted_blocks=1 
00:05:42.919 00:05:42.919 ' 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.919 --rc genhtml_branch_coverage=1 00:05:42.919 --rc genhtml_function_coverage=1 00:05:42.919 --rc genhtml_legend=1 00:05:42.919 --rc geninfo_all_blocks=1 00:05:42.919 --rc geninfo_unexecuted_blocks=1 00:05:42.919 00:05:42.919 ' 00:05:42.919 19:49:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.919 --rc genhtml_branch_coverage=1 00:05:42.919 --rc genhtml_function_coverage=1 00:05:42.919 --rc genhtml_legend=1 00:05:42.919 --rc geninfo_all_blocks=1 00:05:42.919 --rc geninfo_unexecuted_blocks=1 00:05:42.919 00:05:42.919 ' 00:05:42.920 19:49:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.920 19:49:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.920 19:49:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.920 19:49:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.920 19:49:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.920 19:49:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.920 19:49:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.920 ************************************ 00:05:42.920 START TEST default_locks 00:05:42.920 ************************************ 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59227 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59227 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59227 ']' 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.920 19:49:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.178 [2024-09-30 19:49:27.289126] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:43.178 [2024-09-30 19:49:27.289395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59227 ] 00:05:43.178 [2024-09-30 19:49:27.436310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.436 [2024-09-30 19:49:27.576651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.002 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.002 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:44.002 19:49:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59227 00:05:44.002 19:49:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.002 19:49:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59227 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59227 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59227 ']' 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59227 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59227 00:05:44.260 killing process with pid 59227 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 59227' 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59227 00:05:44.260 19:49:28 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59227 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59227 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59227 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.640 ERROR: process (pid: 59227) is no longer running 00:05:45.640 ************************************ 00:05:45.640 END TEST default_locks 00:05:45.640 ************************************ 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59227 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59227 ']' 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59227) - No such process 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:45.640 19:49:29 
event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.640 00:05:45.640 real 0m2.465s 00:05:45.640 user 0m2.494s 00:05:45.640 sys 0m0.457s 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.640 19:49:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 19:49:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.640 19:49:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.640 19:49:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.640 19:49:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 ************************************ 00:05:45.640 START TEST default_locks_via_rpc 00:05:45.640 ************************************ 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59291 00:05:45.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59291 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59291 ']' 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.640 19:49:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 [2024-09-30 19:49:29.806648] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:45.640 [2024-09-30 19:49:29.806761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:05:45.640 [2024-09-30 19:49:29.953211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.899 [2024-09-30 19:49:30.115437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.465 19:49:30 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59291 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59291 00:05:46.465 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59291 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59291 ']' 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59291 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59291 00:05:46.724 killing process with pid 59291 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59291' 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59291 00:05:46.724 19:49:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59291 00:05:48.099 00:05:48.099 real 0m2.380s 00:05:48.099 user 0m2.387s 00:05:48.099 sys 0m0.437s 00:05:48.099 19:49:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.099 
************************************ 00:05:48.099 END TEST default_locks_via_rpc 00:05:48.099 ************************************ 00:05:48.099 19:49:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.099 19:49:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.099 19:49:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.099 19:49:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.099 19:49:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.099 ************************************ 00:05:48.099 START TEST non_locking_app_on_locked_coremask 00:05:48.099 ************************************ 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59343 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59343 /var/tmp/spdk.sock 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59343 ']' 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.099 19:49:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.099 [2024-09-30 19:49:32.248542] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:48.099 [2024-09-30 19:49:32.248663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59343 ] 00:05:48.099 [2024-09-30 19:49:32.398479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.357 [2024-09-30 19:49:32.539081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
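The `waitforlisten` calls above block until the freshly launched `spdk_tgt` is listening on its RPC UNIX socket, bounded by the `max_retries=100` visible in the trace. A minimal sketch of that polling loop, with a plain temp file standing in for the RPC socket (an assumption for self-containment; the real helper polls `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style polling loop. Assumption: a temp file
# stands in for the target's RPC UNIX domain socket.
set -euo pipefail

target=$(mktemp -u)     # path the simulated "target" will create
max_retries=100         # mirrors 'local max_retries=100' in the trace

# Simulated target: comes up after a short delay.
( sleep 0.2; : > "$target" ) &

i=0
until [[ -e $target ]]; do
    (( ++i > max_retries )) && { echo "timed out waiting for $target" >&2; exit 1; }
    sleep 0.1
done
echo "listening: $target"
wait
rm -f "$target"
```

The real helper additionally issues an RPC over the socket to confirm the app is ready, not just that the socket path exists.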
00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59359 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59359 /var/tmp/spdk2.sock 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59359 ']' 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.925 19:49:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.925 [2024-09-30 19:49:33.146613] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:48.925 [2024-09-30 19:49:33.147065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59359 ] 00:05:49.205 [2024-09-30 19:49:33.292653] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:49.205 [2024-09-30 19:49:33.292692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.479 [2024-09-30 19:49:33.580315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.415 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.415 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.415 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59343 00:05:50.415 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.415 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59343 00:05:50.673 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59343 00:05:50.673 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59343 ']' 00:05:50.673 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59343 00:05:50.673 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.673 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.673 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
59343 00:05:50.674 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.674 killing process with pid 59343 00:05:50.674 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.674 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59343' 00:05:50.674 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59343 00:05:50.674 19:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59343 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59359 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59359 ']' 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59359 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59359 00:05:53.204 killing process with pid 59359 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59359' 00:05:53.204 19:49:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59359 00:05:53.204 19:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59359 00:05:54.578 ************************************ 00:05:54.578 END TEST non_locking_app_on_locked_coremask 00:05:54.578 ************************************ 00:05:54.578 00:05:54.578 real 0m6.447s 00:05:54.578 user 0m6.698s 00:05:54.578 sys 0m0.844s 00:05:54.578 19:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.578 19:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.578 19:49:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.578 19:49:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.578 19:49:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.578 19:49:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.578 ************************************ 00:05:54.578 START TEST locking_app_on_unlocked_coremask 00:05:54.578 ************************************ 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:54.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
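The teardown sequence repeated above (`'[' -z PID ']'`, `kill -0`, `uname`, `ps --no-headers -o comm=`, `kill`, `wait`) is the harness's `killprocess` helper. A sketch reconstructed from those xtrace lines (an assumption, not the verbatim autotest code), using a `sleep` as a stand-in for the reactor process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace. Assumption:
# reconstructed from the xtrace lines; 'sleep' stands in for spdk_tgt.
set -euo pipefail

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1               # the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 0  # kill -0: is it still alive?
    if [[ $(uname) == Linux ]]; then
        # The trace resolves the process name before killing it.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($process_name)"
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap; ignore the TERM status
}

sleep 60 &
pid=$!
killprocess "$pid"
```

The `wait` at the end is why the trace shows a final `wait PID` after each `kill`: it reaps the child so the test can assert on a clean exit.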
00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59450 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59450 /var/tmp/spdk.sock 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59450 ']' 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.578 19:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.578 [2024-09-30 19:49:38.754639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:54.578 [2024-09-30 19:49:38.754753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59450 ] 00:05:54.578 [2024-09-30 19:49:38.902584] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.578 [2024-09-30 19:49:38.902713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.835 [2024-09-30 19:49:39.046699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59466 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59466 /var/tmp/spdk2.sock 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59466 ']' 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.404 19:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.404 [2024-09-30 19:49:39.708921] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
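The `locks_exist` checks in this trace pipe `lslocks -p PID` into `grep -q spdk_cpu_lock` to confirm the target is actually holding its per-core lock file. A sketch of that check, with `flock` holding a lock on a temp file as a stand-in for the SPDK target (an assumption; it silently passes through if `lslocks` is unavailable):

```shell
#!/usr/bin/env bash
# Sketch of the locks_exist pattern: 'lslocks -p PID | grep -q spdk_cpu_lock'.
# Assumption: flock(1) on a temp file stands in for the target's core lock.
set -euo pipefail

lockfile=$(mktemp /tmp/spdk_cpu_lock_XXXXXX)

# Hold a file lock from a background process, as the target does per core.
flock "$lockfile" sleep 5 &
holder=$!
sleep 0.3   # give flock a moment to acquire

locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

if locks_exist "$holder"; then
    echo "lock held by pid $holder"
fi
kill "$holder"; wait "$holder" 2>/dev/null || true
rm -f "$lockfile"
```

This is why `--disable-cpumask-locks` matters in the runs above: with locks deactivated, `locks_exist` would find nothing for that process.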
00:05:55.404 [2024-09-30 19:49:39.709220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59466 ] 00:05:55.663 [2024-09-30 19:49:39.857402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.922 [2024-09-30 19:49:40.154113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.856 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.856 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.856 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59466 00:05:56.856 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59466 00:05:56.856 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59450 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59450 ']' 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59450 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59450 00:05:57.113 killing process with pid 59450 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59450' 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59450 00:05:57.113 19:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59450 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59466 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59466 ']' 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59466 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59466 00:05:59.689 killing process with pid 59466 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59466' 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59466 00:05:59.689 19:49:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59466 00:06:01.065 ************************************ 00:06:01.065 END TEST locking_app_on_unlocked_coremask 00:06:01.065 ************************************ 00:06:01.065 00:06:01.065 real 0m6.433s 00:06:01.065 user 0m6.739s 00:06:01.065 sys 0m0.829s 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.065 19:49:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.065 19:49:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.065 19:49:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.065 19:49:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.065 ************************************ 00:06:01.065 START TEST locking_app_on_locked_coremask 00:06:01.065 ************************************ 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59563 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59563 /var/tmp/spdk.sock 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59563 ']' 00:06:01.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
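Each case in this log is framed by asterisk banners emitted by the `run_test` wrapper (`run_test NAME COMMAND...`), with the `'[' 2 -le 1 ']'` check guarding the argument count. A simplified sketch of that wrapper (an assumption inferred from the banner output, not the actual autotest source):

```shell
#!/usr/bin/env bash
# Sketch of the run_test wrapper that produces the START/END TEST banners.
# Assumption: simplified from the banners visible in the trace.
set -euo pipefail

run_test() {
    local name=$1; shift
    # Guard against a missing test command (the '[' 2 -le 1 ']' check).
    (( $# >= 1 )) || { echo "run_test: no command given" >&2; return 1; }
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

run_test locking_demo true
```

Because stdout is unbuffered per record, the START/END banners can interleave with the wrapped command's own output, which is why they appear split across records in the trace.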
00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.065 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.066 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.066 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.066 19:49:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.066 [2024-09-30 19:49:45.244540] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:01.066 [2024-09-30 19:49:45.244672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:06:01.066 [2024-09-30 19:49:45.394837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.322 [2024-09-30 19:49:45.540108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59579 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59579 
/var/tmp/spdk2.sock 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59579 /var/tmp/spdk2.sock 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:01.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59579 /var/tmp/spdk2.sock 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59579 ']' 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.889 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.889 [2024-09-30 19:49:46.146507] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:01.889 [2024-09-30 19:49:46.146645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59579 ] 00:06:02.148 [2024-09-30 19:49:46.296467] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59563 has claimed it. 00:06:02.148 [2024-09-30 19:49:46.296520] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.406 ERROR: process (pid: 59579) is no longer running 00:06:02.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59579) - No such process 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.406 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59563 00:06:02.664 19:49:46 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59563 00:06:02.664 19:49:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59563 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59563 ']' 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59563 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59563 00:06:02.922 killing process with pid 59563 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59563' 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59563 00:06:02.922 19:49:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59563 00:06:04.295 ************************************ 00:06:04.295 END TEST locking_app_on_locked_coremask 00:06:04.295 ************************************ 00:06:04.295 00:06:04.295 real 0m3.248s 00:06:04.295 user 0m3.466s 00:06:04.295 sys 0m0.624s 00:06:04.295 19:49:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:04.295 19:49:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.295 19:49:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.295 19:49:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.295 19:49:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.295 19:49:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.295 ************************************ 00:06:04.295 START TEST locking_overlapped_coremask 00:06:04.295 ************************************ 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59637 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59637 /var/tmp/spdk.sock 00:06:04.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59637 ']' 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
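The `NOT waitforlisten ...` step above, together with the `es=0` / `(( es > 128 ))` / `(( !es == 0 ))` lines, is an expected-failure assertion: the second target must refuse to start on an already-claimed core. A sketch of that inversion helper (an assumption reconstructed from the exit-status handling in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the NOT / exit-status pattern: run a command expected to fail
# and turn its failure into success. Assumption: simplified from the
# 'es' handling visible in the xtrace lines.
set -euo pipefail

NOT() {
    local es=0
    "$@" || es=$?
    # An exit status above 128 means death by signal: a crash, not a
    # clean refusal, so propagate it as a real failure.
    (( es > 128 )) && return $es
    # Exit status 0 means the command unexpectedly succeeded.
    (( es == 0 )) && return 1
    return 0
}

# Example: a stand-in for the second target refusing to start.
NOT false && echo "expected failure observed"
```

The distinction between "failed cleanly" and "crashed" is what the `(( es > 128 ))` branch preserves: the locked-coremask test passes only if the second target exits deliberately.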
00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.295 19:49:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.295 [2024-09-30 19:49:48.552118] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:04.295 [2024-09-30 19:49:48.552431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:06:04.553 [2024-09-30 19:49:48.698009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.553 [2024-09-30 19:49:48.887742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.553 [2024-09-30 19:49:48.888026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.553 [2024-09-30 19:49:48.888063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59655 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59655 /var/tmp/spdk2.sock 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59655 /var/tmp/spdk2.sock 
00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59655 /var/tmp/spdk2.sock 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59655 ']' 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.120 19:49:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.377 [2024-09-30 19:49:49.550910] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:05.377 [2024-09-30 19:49:49.551614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:06:05.377 [2024-09-30 19:49:49.700810] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59637 has claimed it. 00:06:05.377 [2024-09-30 19:49:49.700857] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.943 ERROR: process (pid: 59655) is no longer running 00:06:05.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59655) - No such process 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59637 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59637 ']' 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59637 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59637 00:06:05.943 killing process with pid 59637 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59637' 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59637 00:06:05.943 19:49:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59637 00:06:07.323 00:06:07.323 real 0m3.023s 00:06:07.323 user 0m7.929s 00:06:07.323 sys 0m0.439s 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.323 ************************************ 
00:06:07.323 END TEST locking_overlapped_coremask 00:06:07.323 ************************************ 00:06:07.323 19:49:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.323 19:49:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.323 19:49:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.323 19:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.323 ************************************ 00:06:07.323 START TEST locking_overlapped_coremask_via_rpc 00:06:07.323 ************************************ 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59708 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59708 /var/tmp/spdk.sock 00:06:07.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59708 ']' 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.323 19:49:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.323 [2024-09-30 19:49:51.635403] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:07.323 [2024-09-30 19:49:51.635527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59708 ] 00:06:07.582 [2024-09-30 19:49:51.783363] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.582 [2024-09-30 19:49:51.783415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.582 [2024-09-30 19:49:51.936864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.582 [2024-09-30 19:49:51.937223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.582 [2024-09-30 19:49:51.937226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59725 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59725 /var/tmp/spdk2.sock 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59725 ']' 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.148 19:49:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.406 [2024-09-30 19:49:52.541253] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:08.406 [2024-09-30 19:49:52.541382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:06:08.406 [2024-09-30 19:49:52.691371] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.406 [2024-09-30 19:49:52.691424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.665 [2024-09-30 19:49:52.996372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.665 [2024-09-30 19:49:52.999464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.665 [2024-09-30 19:49:52.999496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.600 [2024-09-30 19:49:53.939449] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59708 has claimed it. 00:06:09.600 request: 00:06:09.600 { 00:06:09.600 "method": "framework_enable_cpumask_locks", 00:06:09.600 "req_id": 1 00:06:09.600 } 00:06:09.600 Got JSON-RPC error response 00:06:09.600 response: 00:06:09.600 { 00:06:09.600 "code": -32603, 00:06:09.600 "message": "Failed to claim CPU core: 2" 00:06:09.600 } 00:06:09.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59708 /var/tmp/spdk.sock 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59708 ']' 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.600 19:49:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:09.858 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59725 /var/tmp/spdk2.sock 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59725 ']' 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.859 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 ************************************ 00:06:10.117 END TEST locking_overlapped_coremask_via_rpc 00:06:10.117 ************************************ 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.117 00:06:10.117 real 0m2.818s 00:06:10.117 user 0m1.082s 00:06:10.117 sys 0m0.141s 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.117 19:49:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.117 19:49:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.117 19:49:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59708 ]] 00:06:10.117 19:49:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59708 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59708 ']' 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59708 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59708 00:06:10.118 killing process with pid 59708 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59708' 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59708 00:06:10.118 19:49:54 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59708 00:06:11.494 19:49:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59725 ]] 00:06:11.494 19:49:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59725 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59725 ']' 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59725 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59725 00:06:11.494 killing process with pid 59725 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59725' 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59725 00:06:11.494 19:49:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59725 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59708 ]] 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59708 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59708 ']' 00:06:12.868 Process with pid 59708 is not found 00:06:12.868 Process with pid 59725 is not found 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59708 00:06:12.868 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59708) - No such process 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59708 is not found' 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59725 ]] 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59725 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59725 ']' 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59725 00:06:12.868 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59725) - No such process 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59725 is not found' 00:06:12.868 19:49:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.868 ************************************ 00:06:12.868 END TEST cpu_locks 00:06:12.868 ************************************ 00:06:12.868 00:06:12.868 real 0m29.950s 00:06:12.868 user 0m50.597s 00:06:12.868 sys 0m4.602s 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:12.868 19:49:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.868 ************************************ 00:06:12.868 END TEST event 00:06:12.868 ************************************ 00:06:12.868 00:06:12.868 real 0m58.437s 00:06:12.868 user 1m46.488s 00:06:12.868 sys 0m7.515s 00:06:12.868 19:49:57 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.868 19:49:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.868 19:49:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.868 19:49:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.868 19:49:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.868 19:49:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.868 ************************************ 00:06:12.868 START TEST thread 00:06:12.869 ************************************ 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.869 * Looking for test storage... 
00:06:12.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.869 19:49:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.869 19:49:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.869 19:49:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.869 19:49:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.869 19:49:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.869 19:49:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.869 19:49:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.869 19:49:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.869 19:49:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.869 19:49:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.869 19:49:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.869 19:49:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:12.869 19:49:57 thread -- scripts/common.sh@345 -- # : 1 00:06:12.869 19:49:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.869 19:49:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.869 19:49:57 thread -- scripts/common.sh@365 -- # decimal 1 00:06:12.869 19:49:57 thread -- scripts/common.sh@353 -- # local d=1 00:06:12.869 19:49:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.869 19:49:57 thread -- scripts/common.sh@355 -- # echo 1 00:06:12.869 19:49:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.869 19:49:57 thread -- scripts/common.sh@366 -- # decimal 2 00:06:12.869 19:49:57 thread -- scripts/common.sh@353 -- # local d=2 00:06:12.869 19:49:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.869 19:49:57 thread -- scripts/common.sh@355 -- # echo 2 00:06:12.869 19:49:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.869 19:49:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.869 19:49:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.869 19:49:57 thread -- scripts/common.sh@368 -- # return 0 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.869 --rc genhtml_branch_coverage=1 00:06:12.869 --rc genhtml_function_coverage=1 00:06:12.869 --rc genhtml_legend=1 00:06:12.869 --rc geninfo_all_blocks=1 00:06:12.869 --rc geninfo_unexecuted_blocks=1 00:06:12.869 00:06:12.869 ' 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.869 --rc genhtml_branch_coverage=1 00:06:12.869 --rc genhtml_function_coverage=1 00:06:12.869 --rc genhtml_legend=1 00:06:12.869 --rc geninfo_all_blocks=1 00:06:12.869 --rc geninfo_unexecuted_blocks=1 00:06:12.869 00:06:12.869 ' 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.869 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.869 --rc genhtml_branch_coverage=1 00:06:12.869 --rc genhtml_function_coverage=1 00:06:12.869 --rc genhtml_legend=1 00:06:12.869 --rc geninfo_all_blocks=1 00:06:12.869 --rc geninfo_unexecuted_blocks=1 00:06:12.869 00:06:12.869 ' 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.869 --rc genhtml_branch_coverage=1 00:06:12.869 --rc genhtml_function_coverage=1 00:06:12.869 --rc genhtml_legend=1 00:06:12.869 --rc geninfo_all_blocks=1 00:06:12.869 --rc geninfo_unexecuted_blocks=1 00:06:12.869 00:06:12.869 ' 00:06:12.869 19:49:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.869 19:49:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.127 ************************************ 00:06:13.127 START TEST thread_poller_perf 00:06:13.127 ************************************ 00:06:13.127 19:49:57 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.127 [2024-09-30 19:49:57.264184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:13.127 [2024-09-30 19:49:57.264426] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:06:13.127 [2024-09-30 19:49:57.413143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.402 [2024-09-30 19:49:57.562192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.402 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:14.783 ====================================== 00:06:14.783 busy:2609521918 (cyc) 00:06:14.783 total_run_count: 405000 00:06:14.783 tsc_hz: 2600000000 (cyc) 00:06:14.783 ====================================== 00:06:14.783 poller_cost: 6443 (cyc), 2478 (nsec) 00:06:14.783 ************************************ 00:06:14.783 END TEST thread_poller_perf 00:06:14.783 ************************************ 00:06:14.783 00:06:14.783 real 0m1.534s 00:06:14.783 user 0m1.353s 00:06:14.783 sys 0m0.074s 00:06:14.783 19:49:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.783 19:49:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.783 19:49:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.783 19:49:58 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:14.783 19:49:58 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.783 19:49:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.783 ************************************ 00:06:14.783 START TEST thread_poller_perf 00:06:14.783 ************************************ 00:06:14.783 19:49:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:06:14.783 [2024-09-30 19:49:58.837946] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:14.783 [2024-09-30 19:49:58.838048] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59917 ] 00:06:14.783 [2024-09-30 19:49:58.986181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.783 [2024-09-30 19:49:59.132348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.783 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:16.158 ====================================== 00:06:16.158 busy:2602753960 (cyc) 00:06:16.158 total_run_count: 5263000 00:06:16.158 tsc_hz: 2600000000 (cyc) 00:06:16.158 ====================================== 00:06:16.158 poller_cost: 494 (cyc), 190 (nsec) 00:06:16.158 00:06:16.158 real 0m1.525s 00:06:16.158 user 0m1.350s 00:06:16.158 sys 0m0.068s 00:06:16.158 19:50:00 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.158 19:50:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.158 ************************************ 00:06:16.158 END TEST thread_poller_perf 00:06:16.158 ************************************ 00:06:16.158 19:50:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:16.158 ************************************ 00:06:16.158 END TEST thread 00:06:16.158 ************************************ 00:06:16.158 00:06:16.158 real 0m3.276s 00:06:16.158 user 0m2.801s 00:06:16.158 sys 0m0.257s 00:06:16.158 19:50:00 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.158 19:50:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.158 19:50:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:16.158 19:50:00 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.158 19:50:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.158 19:50:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.158 19:50:00 -- common/autotest_common.sh@10 -- # set +x 00:06:16.158 ************************************ 00:06:16.158 START TEST app_cmdline 00:06:16.158 ************************************ 00:06:16.158 19:50:00 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:16.158 * Looking for test storage... 00:06:16.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:16.158 19:50:00 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.158 19:50:00 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.158 19:50:00 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:16.417 19:50:00 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:16.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.417 19:50:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.417 --rc genhtml_branch_coverage=1 00:06:16.417 --rc genhtml_function_coverage=1 00:06:16.417 --rc genhtml_legend=1 00:06:16.417 --rc geninfo_all_blocks=1 00:06:16.417 --rc geninfo_unexecuted_blocks=1 00:06:16.417 00:06:16.417 ' 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.417 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:16.417 --rc genhtml_branch_coverage=1 00:06:16.417 --rc genhtml_function_coverage=1 00:06:16.417 --rc genhtml_legend=1 00:06:16.417 --rc geninfo_all_blocks=1 00:06:16.417 --rc geninfo_unexecuted_blocks=1 00:06:16.417 00:06:16.417 ' 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.417 --rc genhtml_branch_coverage=1 00:06:16.417 --rc genhtml_function_coverage=1 00:06:16.417 --rc genhtml_legend=1 00:06:16.417 --rc geninfo_all_blocks=1 00:06:16.417 --rc geninfo_unexecuted_blocks=1 00:06:16.417 00:06:16.417 ' 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.417 --rc genhtml_branch_coverage=1 00:06:16.417 --rc genhtml_function_coverage=1 00:06:16.417 --rc genhtml_legend=1 00:06:16.417 --rc geninfo_all_blocks=1 00:06:16.417 --rc geninfo_unexecuted_blocks=1 00:06:16.417 00:06:16.417 ' 00:06:16.417 19:50:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:16.417 19:50:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60001 00:06:16.417 19:50:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60001 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60001 ']' 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.417 19:50:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.417 19:50:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.417 [2024-09-30 19:50:00.619999] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:16.417 [2024-09-30 19:50:00.620217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:06:16.417 [2024-09-30 19:50:00.764943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.675 [2024-09-30 19:50:00.948372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.242 19:50:01 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.242 19:50:01 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:17.242 19:50:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:17.500 { 00:06:17.500 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:17.500 "fields": { 00:06:17.500 "major": 25, 00:06:17.500 "minor": 1, 00:06:17.500 "patch": 0, 00:06:17.500 "suffix": "-pre", 00:06:17.500 "commit": "09cc66129" 00:06:17.500 } 00:06:17.500 } 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@10 
-- # set +x 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.500 19:50:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:17.500 19:50:01 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.759 request: 00:06:17.759 { 00:06:17.759 "method": "env_dpdk_get_mem_stats", 00:06:17.759 
"req_id": 1 00:06:17.759 } 00:06:17.759 Got JSON-RPC error response 00:06:17.759 response: 00:06:17.759 { 00:06:17.759 "code": -32601, 00:06:17.759 "message": "Method not found" 00:06:17.759 } 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.759 19:50:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60001 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60001 ']' 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60001 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.759 19:50:01 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60001 00:06:17.759 killing process with pid 60001 00:06:17.759 19:50:02 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.759 19:50:02 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.759 19:50:02 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60001' 00:06:17.759 19:50:02 app_cmdline -- common/autotest_common.sh@969 -- # kill 60001 00:06:17.759 19:50:02 app_cmdline -- common/autotest_common.sh@974 -- # wait 60001 00:06:19.132 00:06:19.132 real 0m3.059s 00:06:19.132 user 0m3.285s 00:06:19.132 sys 0m0.426s 00:06:19.132 19:50:03 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.132 ************************************ 00:06:19.132 END TEST app_cmdline 00:06:19.132 ************************************ 00:06:19.132 19:50:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.391 19:50:03 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:19.391 19:50:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.391 19:50:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.391 19:50:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.391 ************************************ 00:06:19.391 START TEST version 00:06:19.391 ************************************ 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:19.391 * Looking for test storage... 00:06:19.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.391 19:50:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.391 19:50:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.391 19:50:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.391 19:50:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.391 19:50:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.391 19:50:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.391 19:50:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.391 19:50:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.391 19:50:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.391 19:50:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.391 19:50:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.391 19:50:03 version -- scripts/common.sh@344 -- # case "$op" in 00:06:19.391 19:50:03 version -- scripts/common.sh@345 -- # : 1 00:06:19.391 19:50:03 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.391 19:50:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.391 19:50:03 version -- scripts/common.sh@365 -- # decimal 1 00:06:19.391 19:50:03 version -- scripts/common.sh@353 -- # local d=1 00:06:19.391 19:50:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.391 19:50:03 version -- scripts/common.sh@355 -- # echo 1 00:06:19.391 19:50:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.391 19:50:03 version -- scripts/common.sh@366 -- # decimal 2 00:06:19.391 19:50:03 version -- scripts/common.sh@353 -- # local d=2 00:06:19.391 19:50:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.391 19:50:03 version -- scripts/common.sh@355 -- # echo 2 00:06:19.391 19:50:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.391 19:50:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.391 19:50:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.391 19:50:03 version -- scripts/common.sh@368 -- # return 0 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 
00:06:19.391 00:06:19.391 ' 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 19:50:03 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.391 --rc genhtml_branch_coverage=1 00:06:19.391 --rc genhtml_function_coverage=1 00:06:19.391 --rc genhtml_legend=1 00:06:19.391 --rc geninfo_all_blocks=1 00:06:19.391 --rc geninfo_unexecuted_blocks=1 00:06:19.391 00:06:19.391 ' 00:06:19.391 19:50:03 version -- app/version.sh@17 -- # get_header_version major 00:06:19.391 19:50:03 version -- app/version.sh@14 -- # cut -f2 00:06:19.391 19:50:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.391 19:50:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.391 19:50:03 version -- app/version.sh@17 -- # major=25 00:06:19.391 19:50:03 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.391 19:50:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.391 19:50:03 version -- app/version.sh@14 -- # cut -f2 00:06:19.391 19:50:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.391 19:50:03 version -- app/version.sh@18 -- # minor=1 00:06:19.391 19:50:03 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.391 19:50:03 version -- app/version.sh@14 -- # cut -f2 00:06:19.391 19:50:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.392 19:50:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' 
/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.392 19:50:03 version -- app/version.sh@19 -- # patch=0 00:06:19.392 19:50:03 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.392 19:50:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:19.392 19:50:03 version -- app/version.sh@14 -- # cut -f2 00:06:19.392 19:50:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.392 19:50:03 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.392 19:50:03 version -- app/version.sh@22 -- # version=25.1 00:06:19.392 19:50:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.392 19:50:03 version -- app/version.sh@28 -- # version=25.1rc0 00:06:19.392 19:50:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:19.392 19:50:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.392 19:50:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:19.392 19:50:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:19.392 ************************************ 00:06:19.392 END TEST version 00:06:19.392 ************************************ 00:06:19.392 00:06:19.392 real 0m0.193s 00:06:19.392 user 0m0.119s 00:06:19.392 sys 0m0.100s 00:06:19.392 19:50:03 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.392 19:50:03 version -- common/autotest_common.sh@10 -- # set +x 00:06:19.392 19:50:03 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:19.392 19:50:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:19.392 19:50:03 -- spdk/autotest.sh@194 -- # uname -s 00:06:19.392 19:50:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:19.392 19:50:03 -- spdk/autotest.sh@195 -- 
# [[ 0 -eq 1 ]] 00:06:19.392 19:50:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.392 19:50:03 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:19.392 19:50:03 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:19.392 19:50:03 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:19.392 19:50:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.392 19:50:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.392 ************************************ 00:06:19.392 START TEST blockdev_nvme 00:06:19.392 ************************************ 00:06:19.392 19:50:03 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:19.652 * Looking for test storage... 00:06:19.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.652 19:50:03 blockdev_nvme -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.652 19:50:03 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.652 --rc genhtml_branch_coverage=1 00:06:19.652 --rc genhtml_function_coverage=1 00:06:19.652 --rc genhtml_legend=1 00:06:19.652 --rc geninfo_all_blocks=1 00:06:19.652 --rc 
geninfo_unexecuted_blocks=1 00:06:19.652 00:06:19.652 ' 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.652 --rc genhtml_branch_coverage=1 00:06:19.652 --rc genhtml_function_coverage=1 00:06:19.652 --rc genhtml_legend=1 00:06:19.652 --rc geninfo_all_blocks=1 00:06:19.652 --rc geninfo_unexecuted_blocks=1 00:06:19.652 00:06:19.652 ' 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.652 --rc genhtml_branch_coverage=1 00:06:19.652 --rc genhtml_function_coverage=1 00:06:19.652 --rc genhtml_legend=1 00:06:19.652 --rc geninfo_all_blocks=1 00:06:19.652 --rc geninfo_unexecuted_blocks=1 00:06:19.652 00:06:19.652 ' 00:06:19.652 19:50:03 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.652 --rc genhtml_branch_coverage=1 00:06:19.652 --rc genhtml_function_coverage=1 00:06:19.652 --rc genhtml_legend=1 00:06:19.652 --rc geninfo_all_blocks=1 00:06:19.652 --rc geninfo_unexecuted_blocks=1 00:06:19.652 00:06:19.652 ' 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:19.652 19:50:03 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:19.652 19:50:03 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:19.653 19:50:03 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60178 00:06:19.653 19:50:03 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:19.653 19:50:03 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60178 00:06:19.653 19:50:03 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60178 ']' 00:06:19.653 19:50:03 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.653 19:50:03 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.653 19:50:03 blockdev_nvme 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.653 19:50:03 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.653 19:50:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:19.653 19:50:03 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:19.653 [2024-09-30 19:50:03.962732] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:19.653 [2024-09-30 19:50:03.963141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:06:19.925 [2024-09-30 19:50:04.110839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.925 [2024-09-30 19:50:04.252092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.496 19:50:04 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.496 19:50:04 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:20.496 19:50:04 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:20.496 19:50:04 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:20.496 19:50:04 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:20.496 19:50:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:20.496 19:50:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:20.496 19:50:04 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", 
"traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:20.496 19:50:04 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.496 19:50:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.066 19:50:05 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.066 19:50:05 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:21.066 19:50:05 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.066 19:50:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.066 19:50:05 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.066 19:50:05 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.067 19:50:05 
blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ddb96786-2b62-44b5-a6da-e24c399581be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ddb96786-2b62-44b5-a6da-e24c399581be",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": 
"0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2c66fcb6-7664-4c1f-b022-d0ceedfcf2f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2c66fcb6-7664-4c1f-b022-d0ceedfcf2f7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' 
}' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f541d553-24e7-4d8a-868a-d9d014d49f23"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f541d553-24e7-4d8a-868a-d9d014d49f23",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5fa2299e-1c59-430d-b6b4-646537c92779"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5fa2299e-1c59-430d-b6b4-646537c92779",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f9947d80-8a2f-4b0d-95df-b4c94ac92c62"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f9947d80-8a2f-4b0d-95df-b4c94ac92c62",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2470f1e4-8c44-4e6a-8055-1e768f94adfd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2470f1e4-8c44-4e6a-8055-1e768f94adfd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' 
"ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:21.067 19:50:05 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60178 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60178 ']' 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60178 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60178 00:06:21.067 killing process with pid 60178 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60178' 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60178 00:06:21.067 19:50:05 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60178 00:06:22.454 19:50:06 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:22.454 19:50:06 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:22.454 19:50:06 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:22.454 19:50:06 blockdev_nvme -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:06:22.454 19:50:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.454 ************************************ 00:06:22.454 START TEST bdev_hello_world 00:06:22.454 ************************************ 00:06:22.454 19:50:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:22.454 [2024-09-30 19:50:06.627746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:22.454 [2024-09-30 19:50:06.627948] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:06:22.454 [2024-09-30 19:50:06.767262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.715 [2024-09-30 19:50:06.910066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.282 [2024-09-30 19:50:07.397782] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:23.282 [2024-09-30 19:50:07.397927] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:23.282 [2024-09-30 19:50:07.397948] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:23.282 [2024-09-30 19:50:07.399880] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:23.282 [2024-09-30 19:50:07.400166] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:23.282 [2024-09-30 19:50:07.400186] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:23.282 [2024-09-30 19:50:07.400701] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:23.282 00:06:23.282 [2024-09-30 19:50:07.400809] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:23.846 ************************************ 00:06:23.846 END TEST bdev_hello_world 00:06:23.846 ************************************ 00:06:23.846 00:06:23.846 real 0m1.477s 00:06:23.846 user 0m1.204s 00:06:23.846 sys 0m0.165s 00:06:23.846 19:50:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.846 19:50:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:23.846 19:50:08 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:23.846 19:50:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:23.846 19:50:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.846 19:50:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.846 ************************************ 00:06:23.846 START TEST bdev_bounds 00:06:23.846 ************************************ 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:23.846 Process bdevio pid: 60293 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60293 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60293' 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60293 00:06:23.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60293 ']' 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.846 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:23.846 [2024-09-30 19:50:08.148291] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:23.846 [2024-09-30 19:50:08.148382] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:06:24.104 [2024-09-30 19:50:08.291364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.104 [2024-09-30 19:50:08.438108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.104 [2024-09-30 19:50:08.438390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.104 [2024-09-30 19:50:08.438403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.670 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.670 19:50:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:24.670 19:50:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:24.926 I/O targets: 00:06:24.926 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:24.926 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:24.926 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:24.926 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:24.926 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:24.926 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:24.926 00:06:24.926 00:06:24.926 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.926 http://cunit.sourceforge.net/ 00:06:24.926 00:06:24.926 00:06:24.926 Suite: bdevio tests on: Nvme3n1 00:06:24.926 Test: blockdev write read block ...passed 00:06:24.926 Test: blockdev write zeroes read block ...passed 00:06:24.926 Test: blockdev write zeroes read no split ...passed 00:06:24.926 Test: blockdev write zeroes read split ...passed 00:06:24.926 Test: blockdev write zeroes read split partial ...passed 
00:06:24.926 Test: blockdev reset ...[2024-09-30 19:50:09.125491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:24.926 [2024-09-30 19:50:09.128809] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:24.926 passed 00:06:24.926 Test: blockdev write read 8 blocks ...passed 00:06:24.926 Test: blockdev write read size > 128k ...passed 00:06:24.926 Test: blockdev write read invalid size ...passed 00:06:24.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:24.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:24.926 Test: blockdev write read max offset ...passed 00:06:24.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:24.926 Test: blockdev writev readv 8 blocks ...passed 00:06:24.926 Test: blockdev writev readv 30 x 1block ...passed 00:06:24.926 Test: blockdev writev readv block ...passed 00:06:24.926 Test: blockdev writev readv size > 128k ...passed 00:06:24.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:24.926 Test: blockdev comparev and writev ...[2024-09-30 19:50:09.146752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5e0a000 len:0x1000 00:06:24.926 [2024-09-30 19:50:09.146792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:24.926 passed 00:06:24.926 Test: blockdev nvme passthru rw ...passed 00:06:24.926 Test: blockdev nvme passthru vendor specific ...passed 00:06:24.926 Test: blockdev nvme admin passthru ...[2024-09-30 19:50:09.149415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:24.926 [2024-09-30 19:50:09.149443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c 
p:1 m:0 dnr:1 00:06:24.926 passed 00:06:24.926 Test: blockdev copy ...passed 00:06:24.926 Suite: bdevio tests on: Nvme2n3 00:06:24.926 Test: blockdev write read block ...passed 00:06:24.926 Test: blockdev write zeroes read block ...passed 00:06:24.926 Test: blockdev write zeroes read no split ...passed 00:06:24.926 Test: blockdev write zeroes read split ...passed 00:06:24.926 Test: blockdev write zeroes read split partial ...passed 00:06:24.926 Test: blockdev reset ...[2024-09-30 19:50:09.206750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:24.926 [2024-09-30 19:50:09.209870] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:24.926 passed 00:06:24.926 Test: blockdev write read 8 blocks ...passed 00:06:24.926 Test: blockdev write read size > 128k ...passed 00:06:24.926 Test: blockdev write read invalid size ...passed 00:06:24.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:24.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:24.926 Test: blockdev write read max offset ...passed 00:06:24.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:24.926 Test: blockdev writev readv 8 blocks ...passed 00:06:24.926 Test: blockdev writev readv 30 x 1block ...passed 00:06:24.926 Test: blockdev writev readv block ...passed 00:06:24.926 Test: blockdev writev readv size > 128k ...passed 00:06:24.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:24.926 Test: blockdev comparev and writev ...[2024-09-30 19:50:09.227755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299e04000 len:0x1000 00:06:24.926 [2024-09-30 19:50:09.227795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:24.926 passed 00:06:24.926 Test: blockdev 
nvme passthru rw ...passed 00:06:24.926 Test: blockdev nvme passthru vendor specific ...passed 00:06:24.926 Test: blockdev nvme admin passthru ...[2024-09-30 19:50:09.230083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:24.926 [2024-09-30 19:50:09.230122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:24.926 passed 00:06:24.926 Test: blockdev copy ...passed 00:06:24.926 Suite: bdevio tests on: Nvme2n2 00:06:24.926 Test: blockdev write read block ...passed 00:06:24.926 Test: blockdev write zeroes read block ...passed 00:06:24.926 Test: blockdev write zeroes read no split ...passed 00:06:24.926 Test: blockdev write zeroes read split ...passed 00:06:25.183 Test: blockdev write zeroes read split partial ...passed 00:06:25.183 Test: blockdev reset ...[2024-09-30 19:50:09.290955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:25.183 passed 00:06:25.183 Test: blockdev write read 8 blocks ...[2024-09-30 19:50:09.294130] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:25.183 passed 00:06:25.183 Test: blockdev write read size > 128k ...passed 00:06:25.183 Test: blockdev write read invalid size ...passed 00:06:25.183 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.183 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.183 Test: blockdev write read max offset ...passed 00:06:25.183 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.183 Test: blockdev writev readv 8 blocks ...passed 00:06:25.183 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.183 Test: blockdev writev readv block ...passed 00:06:25.183 Test: blockdev writev readv size > 128k ...passed 00:06:25.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.183 Test: blockdev comparev and writev ...[2024-09-30 19:50:09.313882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa3a000 len:0x1000 00:06:25.183 [2024-09-30 19:50:09.313923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.183 passed 00:06:25.183 Test: blockdev nvme passthru rw ...passed 00:06:25.183 Test: blockdev nvme passthru vendor specific ...passed 00:06:25.183 Test: blockdev nvme admin passthru ...[2024-09-30 19:50:09.316158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.183 [2024-09-30 19:50:09.316184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.183 passed 00:06:25.183 Test: blockdev copy ...passed 00:06:25.183 Suite: bdevio tests on: Nvme2n1 00:06:25.183 Test: blockdev write read block ...passed 00:06:25.183 Test: blockdev write zeroes read block ...passed 00:06:25.183 Test: blockdev write zeroes read no split ...passed 00:06:25.183 Test: 
blockdev write zeroes read split ...passed 00:06:25.183 Test: blockdev write zeroes read split partial ...passed 00:06:25.183 Test: blockdev reset ...[2024-09-30 19:50:09.374554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:25.183 passed 00:06:25.183 Test: blockdev write read 8 blocks ...[2024-09-30 19:50:09.378770] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:25.183 passed 00:06:25.183 Test: blockdev write read size > 128k ...passed 00:06:25.183 Test: blockdev write read invalid size ...passed 00:06:25.183 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.183 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.183 Test: blockdev write read max offset ...passed 00:06:25.183 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.183 Test: blockdev writev readv 8 blocks ...passed 00:06:25.183 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.183 Test: blockdev writev readv block ...passed 00:06:25.183 Test: blockdev writev readv size > 128k ...passed 00:06:25.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.183 Test: blockdev comparev and writev ...[2024-09-30 19:50:09.397904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa34000 len:0x1000 00:06:25.183 [2024-09-30 19:50:09.397942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.183 passed 00:06:25.183 Test: blockdev nvme passthru rw ...passed 00:06:25.183 Test: blockdev nvme passthru vendor specific ...[2024-09-30 19:50:09.400217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.183 [2024-09-30 19:50:09.400248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.183 passed 00:06:25.183 Test: blockdev nvme admin passthru ...passed 00:06:25.183 Test: blockdev copy ...passed 00:06:25.183 Suite: bdevio tests on: Nvme1n1 00:06:25.183 Test: blockdev write read block ...passed 00:06:25.183 Test: blockdev write zeroes read block ...passed 00:06:25.183 Test: blockdev write zeroes read no split ...passed 00:06:25.183 Test: blockdev write zeroes read split ...passed 00:06:25.183 Test: blockdev write zeroes read split partial ...passed 00:06:25.183 Test: blockdev reset ...[2024-09-30 19:50:09.456705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:25.183 passed[2024-09-30 19:50:09.460780] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:25.183 00:06:25.183 Test: blockdev write read 8 blocks ...passed 00:06:25.183 Test: blockdev write read size > 128k ...passed 00:06:25.183 Test: blockdev write read invalid size ...passed 00:06:25.183 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.183 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.183 Test: blockdev write read max offset ...passed 00:06:25.183 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.183 Test: blockdev writev readv 8 blocks ...passed 00:06:25.183 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.183 Test: blockdev writev readv block ...passed 00:06:25.183 Test: blockdev writev readv size > 128k ...passed 00:06:25.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.184 Test: blockdev comparev and writev ...[2024-09-30 19:50:09.471960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa30000 len:0x1000 00:06:25.184 [2024-09-30 19:50:09.472001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:25.184 passed 00:06:25.184 Test: blockdev nvme passthru rw ...passed 00:06:25.184 Test: blockdev nvme passthru vendor specific ...[2024-09-30 19:50:09.473353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:25.184 [2024-09-30 19:50:09.473381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:25.184 passed 00:06:25.184 Test: blockdev nvme admin passthru ...passed 00:06:25.184 Test: blockdev copy ...passed 00:06:25.184 Suite: bdevio tests on: Nvme0n1 00:06:25.184 Test: blockdev write read block ...passed 00:06:25.184 Test: blockdev write zeroes read block ...passed 00:06:25.184 Test: blockdev write zeroes read no split ...passed 00:06:25.184 Test: blockdev write zeroes read split ...passed 00:06:25.184 Test: blockdev write zeroes read split partial ...passed 00:06:25.184 Test: blockdev reset ...[2024-09-30 19:50:09.529106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:25.184 passed 00:06:25.184 Test: blockdev write read 8 blocks ...[2024-09-30 19:50:09.532985] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:25.184 passed 00:06:25.184 Test: blockdev write read size > 128k ...passed 00:06:25.184 Test: blockdev write read invalid size ...passed 00:06:25.184 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:25.184 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:25.184 Test: blockdev write read max offset ...passed 00:06:25.184 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:25.184 Test: blockdev writev readv 8 blocks ...passed 00:06:25.184 Test: blockdev writev readv 30 x 1block ...passed 00:06:25.184 Test: blockdev writev readv block ...passed 00:06:25.184 Test: blockdev writev readv size > 128k ...passed 00:06:25.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:25.441 Test: blockdev comparev and writev ...passed 00:06:25.441 Test: blockdev nvme passthru rw ...[2024-09-30 19:50:09.547145] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:25.441 separate metadata which is not supported yet. 
00:06:25.441 passed 00:06:25.441 Test: blockdev nvme passthru vendor specific ...passed 00:06:25.441 Test: blockdev nvme admin passthru ...[2024-09-30 19:50:09.548894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:25.441 [2024-09-30 19:50:09.548929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:25.441 passed 00:06:25.441 Test: blockdev copy ...passed 00:06:25.441 00:06:25.441 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.441 suites 6 6 n/a 0 0 00:06:25.441 tests 138 138 138 0 0 00:06:25.441 asserts 893 893 893 0 n/a 00:06:25.441 00:06:25.441 Elapsed time = 1.220 seconds 00:06:25.441 0 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60293 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60293 ']' 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60293 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60293 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60293' 00:06:25.441 killing process with pid 60293 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60293 00:06:25.441 19:50:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60293 00:06:26.006 19:50:10 
blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:26.006 00:06:26.006 real 0m2.199s 00:06:26.006 user 0m5.481s 00:06:26.006 sys 0m0.276s 00:06:26.006 19:50:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.006 ************************************ 00:06:26.006 END TEST bdev_bounds 00:06:26.006 ************************************ 00:06:26.006 19:50:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:26.007 19:50:10 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:26.007 19:50:10 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:26.007 19:50:10 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.007 19:50:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.007 ************************************ 00:06:26.007 START TEST bdev_nbd 00:06:26.007 ************************************ 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:26.007 19:50:10 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60347 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60347 /var/tmp/spdk-nbd.sock 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60347 ']' 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:26.007 19:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:26.265 [2024-09-30 19:50:10.430708] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:26.265 [2024-09-30 19:50:10.430820] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.265 [2024-09-30 19:50:10.580366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.522 [2024-09-30 19:50:10.767235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 
-- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.086 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.343 1+0 records in 00:06:27.343 1+0 records out 00:06:27.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343026 s, 11.9 MB/s 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.343 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.601 1+0 records in 00:06:27.601 1+0 records out 00:06:27.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935248 s, 4.4 MB/s 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.601 19:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.859 1+0 records in 00:06:27.859 1+0 records out 00:06:27.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868199 s, 4.7 MB/s 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:27.859 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.119 1+0 records in 00:06:28.119 1+0 records out 00:06:28.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880633 s, 4.7 MB/s 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.119 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.119 19:50:12 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.380 1+0 records in 00:06:28.380 1+0 records out 00:06:28.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730583 s, 5.6 MB/s 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.380 1+0 records in 00:06:28.380 1+0 records out 00:06:28.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431042 s, 9.5 MB/s 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:06:28.380 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd0", 00:06:28.641 "bdev_name": "Nvme0n1" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd1", 00:06:28.641 "bdev_name": "Nvme1n1" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd2", 00:06:28.641 "bdev_name": "Nvme2n1" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd3", 00:06:28.641 "bdev_name": "Nvme2n2" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd4", 00:06:28.641 "bdev_name": "Nvme2n3" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd5", 00:06:28.641 "bdev_name": "Nvme3n1" 00:06:28.641 } 00:06:28.641 ]' 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd0", 00:06:28.641 "bdev_name": "Nvme0n1" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd1", 00:06:28.641 "bdev_name": "Nvme1n1" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd2", 00:06:28.641 "bdev_name": "Nvme2n1" 
00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd3", 00:06:28.641 "bdev_name": "Nvme2n2" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd4", 00:06:28.641 "bdev_name": "Nvme2n3" 00:06:28.641 }, 00:06:28.641 { 00:06:28.641 "nbd_device": "/dev/nbd5", 00:06:28.641 "bdev_name": "Nvme3n1" 00:06:28.641 } 00:06:28.641 ]' 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.641 19:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.900 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.158 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- 
# break 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.417 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.677 19:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.677 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.935 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.193 19:50:14 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.193 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:30.451 /dev/nbd0 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.451 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.451 1+0 records in 00:06:30.451 1+0 records out 00:06:30.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489587 s, 8.4 MB/s 00:06:30.452 
19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.452 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:30.709 /dev/nbd1 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.709 1+0 records in 00:06:30.709 1+0 records out 00:06:30.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327965 s, 12.5 MB/s 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.709 19:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:30.710 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.710 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.710 19:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:30.968 /dev/nbd10 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.968 1+0 records in 00:06:30.968 1+0 records out 00:06:30.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468322 s, 8.7 MB/s 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:30.968 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:31.226 /dev/nbd11 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.226 19:50:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.226 1+0 records in 00:06:31.226 1+0 records out 00:06:31.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479815 s, 8.5 MB/s 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:31.226 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.227 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.227 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:31.227 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.227 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.227 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:31.485 /dev/nbd12 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # 
local i 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.485 1+0 records in 00:06:31.485 1+0 records out 00:06:31.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529749 s, 7.7 MB/s 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:31.485 /dev/nbd13 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:31.485 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:31.745 1+0 records in 00:06:31.745 1+0 records out 00:06:31.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555982 s, 7.4 MB/s 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.745 19:50:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd0", 00:06:31.745 "bdev_name": "Nvme0n1" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd1", 00:06:31.745 "bdev_name": "Nvme1n1" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd10", 00:06:31.745 "bdev_name": "Nvme2n1" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd11", 00:06:31.745 "bdev_name": "Nvme2n2" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd12", 00:06:31.745 "bdev_name": "Nvme2n3" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd13", 00:06:31.745 "bdev_name": "Nvme3n1" 00:06:31.745 } 00:06:31.745 ]' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd0", 00:06:31.745 "bdev_name": "Nvme0n1" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd1", 00:06:31.745 "bdev_name": "Nvme1n1" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd10", 00:06:31.745 "bdev_name": "Nvme2n1" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd11", 00:06:31.745 "bdev_name": "Nvme2n2" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd12", 00:06:31.745 "bdev_name": "Nvme2n3" 00:06:31.745 }, 00:06:31.745 { 00:06:31.745 "nbd_device": "/dev/nbd13", 00:06:31.745 "bdev_name": "Nvme3n1" 00:06:31.745 } 00:06:31.745 ]' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.745 /dev/nbd1 
00:06:31.745 /dev/nbd10 00:06:31.745 /dev/nbd11 00:06:31.745 /dev/nbd12 00:06:31.745 /dev/nbd13' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.745 /dev/nbd1 00:06:31.745 /dev/nbd10 00:06:31.745 /dev/nbd11 00:06:31.745 /dev/nbd12 00:06:31.745 /dev/nbd13' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:31.745 256+0 records in 00:06:31.745 256+0 records out 00:06:31.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00816043 s, 128 MB/s 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.745 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.004 256+0 records in 00:06:32.004 256+0 records out 00:06:32.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0502758 s, 20.9 MB/s 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.004 256+0 records in 00:06:32.004 256+0 records out 00:06:32.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0506445 s, 20.7 MB/s 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:32.004 256+0 records in 00:06:32.004 256+0 records out 00:06:32.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0549474 s, 19.1 MB/s 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:32.004 256+0 records in 00:06:32.004 256+0 records out 00:06:32.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0522665 s, 20.1 MB/s 00:06:32.004 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.005 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:32.263 256+0 records in 00:06:32.263 256+0 records out 00:06:32.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0524083 s, 20.0 MB/s 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:32.263 256+0 records in 00:06:32.263 256+0 records out 00:06:32.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0723273 s, 14.5 MB/s 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for 
i in "${nbd_list[@]}" 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:32.263 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:32.264 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.264 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.264 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.264 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:32.264 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.264 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.522 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.780 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.780 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.780 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.780 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.781 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.781 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.781 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.781 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.781 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.781 19:50:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.781 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.039 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:33.297 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:33.297 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.298 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:33.556 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:33.556 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:33.556 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.557 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.815 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 
00:06:33.815 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.815 19:50:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:33.815 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:34.073 malloc_lvol_verify 00:06:34.073 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:34.073 883192ca-97e5-4dea-a4a6-39d94d28a774 00:06:34.073 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:34.332 45a29f54-3a58-46bf-a92d-e56fbe47d269 
00:06:34.332 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:34.591 /dev/nbd0 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:34.591 mke2fs 1.47.0 (5-Feb-2023) 00:06:34.591 Discarding device blocks: 0/4096 done 00:06:34.591 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:34.591 00:06:34.591 Allocating group tables: 0/1 done 00:06:34.591 Writing inode tables: 0/1 done 00:06:34.591 Creating journal (1024 blocks): done 00:06:34.591 Writing superblocks and filesystem accounting information: 0/1 done 00:06:34.591 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.591 19:50:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.849 
19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60347 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60347 ']' 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60347 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60347 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.849 killing process with pid 60347 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60347' 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60347 00:06:34.849 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60347 00:06:35.415 19:50:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:35.415 00:06:35.415 real 0m9.410s 00:06:35.415 user 0m13.548s 00:06:35.415 sys 0m3.011s 00:06:35.415 
19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.415 19:50:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:35.415 ************************************ 00:06:35.415 END TEST bdev_nbd 00:06:35.415 ************************************ 00:06:35.673 19:50:19 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:35.673 19:50:19 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:35.673 skipping fio tests on NVMe due to multi-ns failures. 00:06:35.673 19:50:19 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:35.673 19:50:19 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:35.673 19:50:19 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:35.673 19:50:19 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:35.673 19:50:19 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.673 19:50:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.673 ************************************ 00:06:35.673 START TEST bdev_verify 00:06:35.673 ************************************ 00:06:35.673 19:50:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:35.673 [2024-09-30 19:50:19.875859] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:35.673 [2024-09-30 19:50:19.875973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:06:35.673 [2024-09-30 19:50:20.027130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.930 [2024-09-30 19:50:20.208028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.930 [2024-09-30 19:50:20.208221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.509 Running I/O for 5 seconds... 00:06:41.612 22400.00 IOPS, 87.50 MiB/s 22592.00 IOPS, 88.25 MiB/s 22528.00 IOPS, 88.00 MiB/s 23056.00 IOPS, 90.06 MiB/s 22617.60 IOPS, 88.35 MiB/s 00:06:41.612 Latency(us) 00:06:41.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.612 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x0 length 0xbd0bd 00:06:41.612 Nvme0n1 : 5.05 1902.76 7.43 0.00 0.00 66966.09 10284.11 74610.22 00:06:41.612 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:41.612 Nvme0n1 : 5.05 1799.92 7.03 0.00 0.00 70765.15 12048.54 69770.63 00:06:41.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x0 length 0xa0000 00:06:41.612 Nvme1n1 : 5.06 1908.39 7.45 0.00 0.00 66741.40 6503.19 73400.32 00:06:41.612 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0xa0000 length 0xa0000 00:06:41.612 Nvme1n1 : 5.08 1803.23 7.04 0.00 0.00 70527.39 7511.43 67350.84 00:06:41.612 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x0 length 0x80000 
00:06:41.612 Nvme2n1 : 5.07 1907.65 7.45 0.00 0.00 66623.79 7461.02 71383.83 00:06:41.612 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x80000 length 0x80000 00:06:41.612 Nvme2n1 : 5.08 1802.76 7.04 0.00 0.00 70395.80 7864.32 65334.35 00:06:41.612 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x0 length 0x80000 00:06:41.612 Nvme2n2 : 5.07 1907.11 7.45 0.00 0.00 66505.16 6956.90 67350.84 00:06:41.612 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x80000 length 0x80000 00:06:41.612 Nvme2n2 : 5.09 1811.24 7.08 0.00 0.00 70067.91 9124.63 62511.26 00:06:41.612 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x0 length 0x80000 00:06:41.612 Nvme2n3 : 5.08 1914.88 7.48 0.00 0.00 66219.19 10284.11 69367.34 00:06:41.612 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x80000 length 0x80000 00:06:41.612 Nvme2n3 : 5.09 1810.76 7.07 0.00 0.00 69939.87 9427.10 68157.44 00:06:41.612 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x0 length 0x20000 00:06:41.612 Nvme3n1 : 5.08 1914.38 7.48 0.00 0.00 66103.76 9326.28 73803.62 00:06:41.612 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:41.612 Verification LBA range: start 0x20000 length 0x20000 00:06:41.612 Nvme3n1 : 5.09 1810.29 7.07 0.00 0.00 69830.65 7713.08 70980.53 00:06:41.612 =================================================================================================================== 00:06:41.612 Total : 22293.38 87.08 0.00 0.00 68339.46 6503.19 74610.22 00:06:42.983 00:06:42.983 real 0m7.340s 00:06:42.983 user 0m13.582s 00:06:42.983 sys 0m0.234s 00:06:42.983 19:50:27 
blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.983 ************************************ 00:06:42.983 END TEST bdev_verify 00:06:42.983 ************************************ 00:06:42.983 19:50:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.983 19:50:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:42.983 19:50:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:42.983 19:50:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.983 19:50:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.983 ************************************ 00:06:42.983 START TEST bdev_verify_big_io 00:06:42.983 ************************************ 00:06:42.983 19:50:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:42.983 [2024-09-30 19:50:27.278819] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.983 [2024-09-30 19:50:27.278942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60818 ] 00:06:43.241 [2024-09-30 19:50:27.429042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.498 [2024-09-30 19:50:27.619647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.498 [2024-09-30 19:50:27.619745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.064 Running I/O for 5 seconds... 
00:06:50.038 656.00 IOPS, 41.00 MiB/s 1041.50 IOPS, 65.09 MiB/s 1300.00 IOPS, 81.25 MiB/s 1606.75 IOPS, 100.42 MiB/s 00:06:50.038 Latency(us) 00:06:50.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x0 length 0xbd0b 00:06:50.038 Nvme0n1 : 5.64 141.89 8.87 0.00 0.00 876434.98 15325.34 1051802.39 00:06:50.038 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:50.038 Nvme0n1 : 5.75 113.01 7.06 0.00 0.00 1069786.55 10788.23 1245385.65 00:06:50.038 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x0 length 0xa000 00:06:50.038 Nvme1n1 : 5.60 148.56 9.29 0.00 0.00 807211.81 74206.92 871124.68 00:06:50.038 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0xa000 length 0xa000 00:06:50.038 Nvme1n1 : 5.87 119.41 7.46 0.00 0.00 988670.93 114536.76 1025991.29 00:06:50.038 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x0 length 0x8000 00:06:50.038 Nvme2n1 : 5.72 151.45 9.47 0.00 0.00 763609.31 37708.41 796917.76 00:06:50.038 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x8000 length 0x8000 00:06:50.038 Nvme2n1 : 5.87 111.54 6.97 0.00 0.00 1018920.33 116149.96 1768060.46 00:06:50.038 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x0 length 0x8000 00:06:50.038 Nvme2n2 : 5.72 156.65 9.79 0.00 0.00 722955.25 77836.60 822728.86 00:06:50.038 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x8000 length 0x8000 
00:06:50.038 Nvme2n2 : 5.94 120.79 7.55 0.00 0.00 916690.73 19156.68 1793871.56 00:06:50.038 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x0 length 0x8000 00:06:50.038 Nvme2n3 : 5.87 170.28 10.64 0.00 0.00 645850.88 25407.80 884030.23 00:06:50.038 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x8000 length 0x8000 00:06:50.038 Nvme2n3 : 5.99 136.47 8.53 0.00 0.00 778827.15 14216.27 1413157.81 00:06:50.038 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x0 length 0x2000 00:06:50.038 Nvme3n1 : 5.92 190.75 11.92 0.00 0.00 561779.84 696.32 961463.53 00:06:50.038 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:50.038 Verification LBA range: start 0x2000 length 0x2000 00:06:50.038 Nvme3n1 : 6.06 187.22 11.70 0.00 0.00 554346.05 378.09 1845493.76 00:06:50.038 =================================================================================================================== 00:06:50.038 Total : 1748.02 109.25 0.00 0.00 778643.30 378.09 1845493.76 00:06:51.412 00:06:51.412 real 0m8.542s 00:06:51.412 user 0m16.071s 00:06:51.412 sys 0m0.205s 00:06:51.413 19:50:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.413 19:50:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:51.413 ************************************ 00:06:51.413 END TEST bdev_verify_big_io 00:06:51.413 ************************************ 00:06:51.670 19:50:35 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.670 19:50:35 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:51.670 19:50:35 
blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.670 19:50:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.670 ************************************ 00:06:51.670 START TEST bdev_write_zeroes 00:06:51.670 ************************************ 00:06:51.670 19:50:35 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.670 [2024-09-30 19:50:35.870057] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:51.670 [2024-09-30 19:50:35.870148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60929 ] 00:06:51.670 [2024-09-30 19:50:36.005490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.927 [2024-09-30 19:50:36.151409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.492 Running I/O for 1 seconds... 
00:06:53.423 71040.00 IOPS, 277.50 MiB/s 00:06:53.424 Latency(us) 00:06:53.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.424 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.424 Nvme0n1 : 1.02 11822.47 46.18 0.00 0.00 10807.31 4965.61 20769.87 00:06:53.424 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.424 Nvme1n1 : 1.02 11808.99 46.13 0.00 0.00 10807.63 7461.02 19459.15 00:06:53.424 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.424 Nvme2n1 : 1.02 11795.60 46.08 0.00 0.00 10799.19 7914.73 18854.20 00:06:53.424 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.424 Nvme2n2 : 1.02 11781.97 46.02 0.00 0.00 10796.92 7813.91 18350.08 00:06:53.424 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.424 Nvme2n3 : 1.02 11768.55 45.97 0.00 0.00 10789.36 7410.61 18450.90 00:06:53.424 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:53.424 Nvme3n1 : 1.02 11755.28 45.92 0.00 0.00 10779.19 6755.25 20064.10 00:06:53.424 =================================================================================================================== 00:06:53.424 Total : 70732.87 276.30 0.00 0.00 10796.60 4965.61 20769.87 00:06:54.355 00:06:54.355 real 0m2.714s 00:06:54.355 user 0m2.428s 00:06:54.355 sys 0m0.174s 00:06:54.355 19:50:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.355 19:50:38 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:54.355 ************************************ 00:06:54.355 END TEST bdev_write_zeroes 00:06:54.355 ************************************ 00:06:54.355 19:50:38 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.355 19:50:38 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:54.355 19:50:38 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.355 19:50:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.355 ************************************ 00:06:54.355 START TEST bdev_json_nonenclosed 00:06:54.355 ************************************ 00:06:54.355 19:50:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:54.355 [2024-09-30 19:50:38.633898] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:54.355 [2024-09-30 19:50:38.634012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60982 ] 00:06:54.613 [2024-09-30 19:50:38.774838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.614 [2024-09-30 19:50:38.952097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.614 [2024-09-30 19:50:38.952177] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:06:54.614 [2024-09-30 19:50:38.952195] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:54.614 [2024-09-30 19:50:38.952205] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.873 00:06:54.873 real 0m0.659s 00:06:54.873 user 0m0.463s 00:06:54.873 sys 0m0.092s 00:06:54.873 19:50:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.873 ************************************ 00:06:54.873 END TEST bdev_json_nonenclosed 00:06:54.873 ************************************ 00:06:54.873 19:50:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 19:50:39 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.132 19:50:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:55.132 19:50:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.132 19:50:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.132 ************************************ 00:06:55.132 START TEST bdev_json_nonarray 00:06:55.132 ************************************ 00:06:55.132 19:50:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.132 [2024-09-30 19:50:39.332746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:55.132 [2024-09-30 19:50:39.332857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61008 ] 00:06:55.132 [2024-09-30 19:50:39.483082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.390 [2024-09-30 19:50:39.663679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.390 [2024-09-30 19:50:39.663772] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:55.390 [2024-09-30 19:50:39.663790] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:55.390 [2024-09-30 19:50:39.663799] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.649 00:06:55.649 real 0m0.680s 00:06:55.649 user 0m0.471s 00:06:55.649 sys 0m0.104s 00:06:55.649 19:50:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.649 19:50:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:55.649 ************************************ 00:06:55.649 END TEST bdev_json_nonarray 00:06:55.649 ************************************ 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:55.649 19:50:39 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:55.649 ************************************ 00:06:55.649 END TEST blockdev_nvme 00:06:55.649 ************************************ 00:06:55.649 00:06:55.649 real 0m36.238s 00:06:55.649 user 0m56.201s 00:06:55.649 sys 0m4.963s 00:06:55.649 19:50:39 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.649 19:50:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.908 19:50:40 -- spdk/autotest.sh@209 -- # uname -s 00:06:55.908 19:50:40 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:55.908 19:50:40 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:55.908 19:50:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.908 19:50:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.908 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.908 ************************************ 00:06:55.908 START TEST blockdev_nvme_gpt 00:06:55.908 ************************************ 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:55.908 * Looking for test storage... 
00:06:55.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.908 19:50:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.908 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.908 --rc genhtml_branch_coverage=1 00:06:55.908 --rc genhtml_function_coverage=1 00:06:55.908 --rc genhtml_legend=1 00:06:55.908 --rc geninfo_all_blocks=1 00:06:55.908 --rc geninfo_unexecuted_blocks=1 00:06:55.908 00:06:55.908 ' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.909 --rc genhtml_branch_coverage=1 00:06:55.909 --rc genhtml_function_coverage=1 00:06:55.909 --rc genhtml_legend=1 00:06:55.909 --rc geninfo_all_blocks=1 00:06:55.909 
--rc geninfo_unexecuted_blocks=1 00:06:55.909 00:06:55.909 ' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.909 --rc genhtml_branch_coverage=1 00:06:55.909 --rc genhtml_function_coverage=1 00:06:55.909 --rc genhtml_legend=1 00:06:55.909 --rc geninfo_all_blocks=1 00:06:55.909 --rc geninfo_unexecuted_blocks=1 00:06:55.909 00:06:55.909 ' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.909 --rc genhtml_branch_coverage=1 00:06:55.909 --rc genhtml_function_coverage=1 00:06:55.909 --rc genhtml_legend=1 00:06:55.909 --rc geninfo_all_blocks=1 00:06:55.909 --rc geninfo_unexecuted_blocks=1 00:06:55.909 00:06:55.909 ' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 
00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61086 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61086 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61086 ']' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.909 19:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.909 19:50:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.909 [2024-09-30 19:50:40.263993] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:55.909 [2024-09-30 19:50:40.264110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61086 ] 00:06:56.168 [2024-09-30 19:50:40.410577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.425 [2024-09-30 19:50:40.587321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.991 19:50:41 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.991 19:50:41 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:56.991 19:50:41 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:56.991 19:50:41 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:56.991 19:50:41 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:57.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.250 Waiting for block devices as requested 00:06:57.509 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.509 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.509 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:57.509 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.777 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:02.777 19:50:46 blockdev_nvme_gpt -- 
bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in 
/sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- common/autotest_common.sh@1651 
-- # [[ none != none ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:02.777 BYT; 00:07:02.777 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:02.777 BYT; 00:07:02.777 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:02.777 19:50:46 
blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:02.777 19:50:46 
blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:02.777 19:50:46 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:02.777 19:50:46 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:03.714 The operation has completed successfully. 00:07:03.714 19:50:47 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:04.650 The operation has completed successfully. 00:07:04.650 19:50:48 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.483 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.483 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.483 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.741 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:05.741 19:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:05.741 19:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.741 19:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.741 [] 00:07:05.741 19:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.741 19:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:05.741 19:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:05.741 19:50:49 blockdev_nvme_gpt -- 
bdev/blockdev.sh@82 -- # mapfile -t json 00:07:05.741 19:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:05.741 19:50:49 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:05.742 19:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.742 19:50:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:06.000 19:50:50 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:06.000 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.000 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.260 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:06.260 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:06.261 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2d40bc0a-d091-4a07-9e1c-02508a6e7d0b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2d40bc0a-d091-4a07-9e1c-02508a6e7d0b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' 
"flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' 
"partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f0245c87-2dd7-424f-ba19-21a8c4d51082"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f0245c87-2dd7-424f-ba19-21a8c4d51082",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": 
false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ff0bff8a-263c-4ce5-b430-09bc2393cba2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ff0bff8a-263c-4ce5-b430-09bc2393cba2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' 
"serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f83ada26-2c66-4fe2-ac63-1248cf429825"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f83ada26-2c66-4fe2-ac63-1248cf429825",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' 
"f2655c16-863f-418d-aa34-b59510b4c04c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f2655c16-863f-418d-aa34-b59510b4c04c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.261 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.261 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:06.261 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:06.261 19:50:50 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61086 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61086 ']' 00:07:06.261 19:50:50 
blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61086 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61086 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.261 killing process with pid 61086 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61086' 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61086 00:07:06.261 19:50:50 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61086 00:07:07.635 19:50:51 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.635 19:50:51 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:07.635 19:50:51 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:07.635 19:50:51 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.635 19:50:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:07.636 ************************************ 00:07:07.636 START TEST bdev_hello_world 00:07:07.636 ************************************ 00:07:07.636 19:50:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:07.636 [2024-09-30 19:50:51.739264] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
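The get_spdk_gpt / get_spdk_gpt_old steps earlier in this log (scripts/common.sh@411-431) recover the SPDK partition-type GUIDs by grepping the SPDK_GPT_PART_TYPE_GUID macro out of module/bdev/gpt/gpt.h, splitting the line on parentheses with a scoped IFS, and normalizing the argument list. A standalone reproduction of that idiom, using a mock header line in place of the real gpt.h (the exact macro layout is an assumption inferred from the intermediate `0x...-0x...` xtrace values above):

```shell
# Mock the single gpt.h line the real script greps for; the macro shape
# here is assumed from the log's intermediate spdk_guid values.
gpt_h=$(mktemp)
echo 'SPDK_GPT_PART_TYPE_GUID SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)' > "$gpt_h"

# Same idiom as scripts/common.sh: IFS='()' (scoped to this read) splits
# the line into prefix / argument-list / trailer; read keeps the middle.
IFS='()' read -r _ spdk_guid _ <<EOF
$(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
EOF

spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
spdk_guid=${spdk_guid//0x/}    # strip the hex prefixes
echo "$spdk_guid"
rm -f "$gpt_h"
```

The resulting `6527994e-2c5a-4eec-9613-8f5944074e8b` is exactly the type GUID the log then feeds to `sgdisk -t 1:...` when labeling SPDK_TEST_first.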
00:07:07.636 [2024-09-30 19:50:51.739391] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61713 ] 00:07:07.636 [2024-09-30 19:50:51.884579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.894 [2024-09-30 19:50:52.029868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.461 [2024-09-30 19:50:52.519189] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:08.461 [2024-09-30 19:50:52.519233] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:08.461 [2024-09-30 19:50:52.519248] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:08.461 [2024-09-30 19:50:52.521205] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:08.461 [2024-09-30 19:50:52.521785] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:08.461 [2024-09-30 19:50:52.521810] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:08.461 [2024-09-30 19:50:52.522063] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
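The `START TEST bdev_hello_world` / `END TEST bdev_hello_world` banners bracketing this run come from autotest's run_test helper, which wraps each sub-test invocation. A reduced sketch of that wrapper (the real helper in autotest_common.sh also records timing and xtrace state, omitted here):

```shell
# Reduced run_test: print a banner, run the command, banner again,
# and propagate the command's exit status to the caller.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_test echo "hello"
```

Keeping the banners in the wrapper rather than in each test is what makes every sub-test in this log greppable by the same `START TEST <name>` marker.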
00:07:08.461 00:07:08.461 [2024-09-30 19:50:52.522088] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:09.026 00:07:09.026 real 0m1.478s 00:07:09.026 user 0m1.213s 00:07:09.026 sys 0m0.161s 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.026 ************************************ 00:07:09.026 END TEST bdev_hello_world 00:07:09.026 ************************************ 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 19:50:53 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:09.026 19:50:53 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.026 19:50:53 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.026 19:50:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 ************************************ 00:07:09.026 START TEST bdev_bounds 00:07:09.026 ************************************ 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61749 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.026 Process bdevio pid: 61749 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61749' 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61749 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61749 ']' 00:07:09.026 
19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.026 19:50:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-09-30 19:50:53.251311] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:09.026 [2024-09-30 19:50:53.251438] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61749 ] 00:07:09.285 [2024-09-30 19:50:53.399246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.285 [2024-09-30 19:50:53.549870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.285 [2024-09-30 19:50:53.550177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.285 [2024-09-30 19:50:53.550196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.852 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.852 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:09.852 19:50:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:09.852 I/O targets: 00:07:09.852 Nvme0n1: 1548666 blocks of 4096 bytes (6050 
MiB) 00:07:09.852 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:09.852 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:09.852 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.852 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.852 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.852 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:09.852 00:07:09.852 00:07:09.852 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.852 http://cunit.sourceforge.net/ 00:07:09.852 00:07:09.852 00:07:09.852 Suite: bdevio tests on: Nvme3n1 00:07:09.852 Test: blockdev write read block ...passed 00:07:09.852 Test: blockdev write zeroes read block ...passed 00:07:09.852 Test: blockdev write zeroes read no split ...passed 00:07:09.852 Test: blockdev write zeroes read split ...passed 00:07:10.111 Test: blockdev write zeroes read split partial ...passed 00:07:10.111 Test: blockdev reset ...[2024-09-30 19:50:54.227051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:10.111 [2024-09-30 19:50:54.229947] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:10.111 passed 00:07:10.111 Test: blockdev write read 8 blocks ...passed 00:07:10.111 Test: blockdev write read size > 128k ...passed 00:07:10.111 Test: blockdev write read invalid size ...passed 00:07:10.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.111 Test: blockdev write read max offset ...passed 00:07:10.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.111 Test: blockdev writev readv 8 blocks ...passed 00:07:10.111 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.111 Test: blockdev writev readv block ...passed 00:07:10.111 Test: blockdev writev readv size > 128k ...passed 00:07:10.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.111 Test: blockdev comparev and writev ...[2024-09-30 19:50:54.236580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bac06000 len:0x1000 00:07:10.111 passed 00:07:10.111 Test: blockdev nvme passthru rw ...[2024-09-30 19:50:54.236627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.111 passed 00:07:10.111 Test: blockdev nvme passthru vendor specific ...[2024-09-30 19:50:54.237157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.111 [2024-09-30 19:50:54.237176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.111 passed 00:07:10.111 Test: blockdev nvme admin passthru ...passed 00:07:10.111 Test: blockdev copy ...passed 00:07:10.111 Suite: bdevio tests on: Nvme2n3 00:07:10.111 Test: blockdev write read block ...passed 00:07:10.111 Test: blockdev write zeroes read block ...passed 00:07:10.111 Test: blockdev write zeroes read no 
split ...passed 00:07:10.111 Test: blockdev write zeroes read split ...passed 00:07:10.111 Test: blockdev write zeroes read split partial ...passed 00:07:10.111 Test: blockdev reset ...[2024-09-30 19:50:54.279215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:10.111 [2024-09-30 19:50:54.281997] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:10.111 passed 00:07:10.111 Test: blockdev write read 8 blocks ...passed 00:07:10.111 Test: blockdev write read size > 128k ...passed 00:07:10.111 Test: blockdev write read invalid size ...passed 00:07:10.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.111 Test: blockdev write read max offset ...passed 00:07:10.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.111 Test: blockdev writev readv 8 blocks ...passed 00:07:10.111 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.111 Test: blockdev writev readv block ...passed 00:07:10.111 Test: blockdev writev readv size > 128k ...passed 00:07:10.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.111 Test: blockdev comparev and writev ...[2024-09-30 19:50:54.288609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c583c000 len:0x1000 00:07:10.111 [2024-09-30 19:50:54.288647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.111 passed 00:07:10.111 Test: blockdev nvme passthru rw ...passed 00:07:10.111 Test: blockdev nvme passthru vendor specific ...[2024-09-30 19:50:54.289342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.111 [2024-09-30 19:50:54.289360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.111 passed 00:07:10.111 Test: blockdev nvme admin passthru ...passed 00:07:10.111 Test: blockdev copy ...passed 00:07:10.111 Suite: bdevio tests on: Nvme2n2 00:07:10.111 Test: blockdev write read block ...passed 00:07:10.111 Test: blockdev write zeroes read block ...passed 00:07:10.111 Test: blockdev write zeroes read no split ...passed 00:07:10.111 Test: blockdev write zeroes read split ...passed 00:07:10.111 Test: blockdev write zeroes read split partial ...passed 00:07:10.111 Test: blockdev reset ...[2024-09-30 19:50:54.344185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:10.111 [2024-09-30 19:50:54.346992] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:10.111 passed 00:07:10.111 Test: blockdev write read 8 blocks ...passed 00:07:10.111 Test: blockdev write read size > 128k ...passed 00:07:10.111 Test: blockdev write read invalid size ...passed 00:07:10.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.111 Test: blockdev write read max offset ...passed 00:07:10.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.111 Test: blockdev writev readv 8 blocks ...passed 00:07:10.111 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.111 Test: blockdev writev readv block ...passed 00:07:10.111 Test: blockdev writev readv size > 128k ...passed 00:07:10.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.111 Test: blockdev comparev and writev ...[2024-09-30 19:50:54.353422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5836000 len:0x1000 00:07:10.111 [2024-09-30 19:50:54.353468] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.111 passed 00:07:10.111 Test: blockdev nvme passthru rw ...passed 00:07:10.111 Test: blockdev nvme passthru vendor specific ...passed 00:07:10.111 Test: blockdev nvme admin passthru ...[2024-09-30 19:50:54.354138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.111 [2024-09-30 19:50:54.354167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.111 passed 00:07:10.111 Test: blockdev copy ...passed 00:07:10.111 Suite: bdevio tests on: Nvme2n1 00:07:10.111 Test: blockdev write read block ...passed 00:07:10.111 Test: blockdev write zeroes read block ...passed 00:07:10.111 Test: blockdev write zeroes read no split ...passed 00:07:10.111 Test: blockdev write zeroes read split ...passed 00:07:10.111 Test: blockdev write zeroes read split partial ...passed 00:07:10.111 Test: blockdev reset ...[2024-09-30 19:50:54.398259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:10.111 [2024-09-30 19:50:54.400917] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:10.111 passed 00:07:10.111 Test: blockdev write read 8 blocks ...passed 00:07:10.111 Test: blockdev write read size > 128k ...passed 00:07:10.111 Test: blockdev write read invalid size ...passed 00:07:10.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.112 Test: blockdev write read max offset ...passed 00:07:10.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.112 Test: blockdev writev readv 8 blocks ...passed 00:07:10.112 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.112 Test: blockdev writev readv block ...passed 00:07:10.112 Test: blockdev writev readv size > 128k ...passed 00:07:10.112 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.112 Test: blockdev comparev and writev ...[2024-09-30 19:50:54.407400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5832000 len:0x1000 00:07:10.112 passed 00:07:10.112 Test: blockdev nvme passthru rw ...[2024-09-30 19:50:54.407448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.112 passed 00:07:10.112 Test: blockdev nvme passthru vendor specific ...[2024-09-30 19:50:54.408182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:10.112 [2024-09-30 19:50:54.408212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:10.112 passed 00:07:10.112 Test: blockdev nvme admin passthru ...passed 00:07:10.112 Test: blockdev copy ...passed 00:07:10.112 Suite: bdevio tests on: Nvme1n1p2 00:07:10.112 Test: blockdev write read block ...passed 00:07:10.112 Test: blockdev write zeroes read block ...passed 00:07:10.112 Test: blockdev write zeroes read no 
split ...passed 00:07:10.112 Test: blockdev write zeroes read split ...passed 00:07:10.112 Test: blockdev write zeroes read split partial ...passed 00:07:10.112 Test: blockdev reset ...[2024-09-30 19:50:54.449313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:10.112 [2024-09-30 19:50:54.451685] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:10.112 passed 00:07:10.112 Test: blockdev write read 8 blocks ...passed 00:07:10.112 Test: blockdev write read size > 128k ...passed 00:07:10.112 Test: blockdev write read invalid size ...passed 00:07:10.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.112 Test: blockdev write read max offset ...passed 00:07:10.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.112 Test: blockdev writev readv 8 blocks ...passed 00:07:10.112 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.112 Test: blockdev writev readv block ...passed 00:07:10.112 Test: blockdev writev readv size > 128k ...passed 00:07:10.112 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.112 Test: blockdev comparev and writev ...[2024-09-30 19:50:54.458637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c582e000 len:0x1000 00:07:10.112 [2024-09-30 19:50:54.458673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.112 passed 00:07:10.112 Test: blockdev nvme passthru rw ...passed 00:07:10.112 Test: blockdev nvme passthru vendor specific ...passed 00:07:10.112 Test: blockdev nvme admin passthru ...passed 00:07:10.112 Test: blockdev copy ...passed 00:07:10.112 Suite: bdevio tests on: Nvme1n1p1 00:07:10.112 Test: blockdev write read 
block ...passed 00:07:10.112 Test: blockdev write zeroes read block ...passed 00:07:10.112 Test: blockdev write zeroes read no split ...passed 00:07:10.371 Test: blockdev write zeroes read split ...passed 00:07:10.371 Test: blockdev write zeroes read split partial ...passed 00:07:10.371 Test: blockdev reset ...[2024-09-30 19:50:54.500790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:10.371 [2024-09-30 19:50:54.503211] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:10.371 passed 00:07:10.371 Test: blockdev write read 8 blocks ...passed 00:07:10.371 Test: blockdev write read size > 128k ...passed 00:07:10.371 Test: blockdev write read invalid size ...passed 00:07:10.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.371 Test: blockdev write read max offset ...passed 00:07:10.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.371 Test: blockdev writev readv 8 blocks ...passed 00:07:10.371 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.371 Test: blockdev writev readv block ...passed 00:07:10.371 Test: blockdev writev readv size > 128k ...passed 00:07:10.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.371 Test: blockdev comparev and writev ...[2024-09-30 19:50:54.510861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c640e000 len:0x1000 00:07:10.371 [2024-09-30 19:50:54.511005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:10.371 passed 00:07:10.371 Test: blockdev nvme passthru rw ...passed 00:07:10.371 Test: blockdev nvme passthru vendor specific ...passed 00:07:10.371 Test: blockdev nvme admin passthru ...passed 
00:07:10.371 Test: blockdev copy ...passed 00:07:10.371 Suite: bdevio tests on: Nvme0n1 00:07:10.371 Test: blockdev write read block ...passed 00:07:10.371 Test: blockdev write zeroes read block ...passed 00:07:10.371 Test: blockdev write zeroes read no split ...passed 00:07:10.371 Test: blockdev write zeroes read split ...passed 00:07:10.371 Test: blockdev write zeroes read split partial ...passed 00:07:10.371 Test: blockdev reset ...[2024-09-30 19:50:54.552479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:10.371 [2024-09-30 19:50:54.554782] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:10.371 passed 00:07:10.371 Test: blockdev write read 8 blocks ...passed 00:07:10.371 Test: blockdev write read size > 128k ...passed 00:07:10.371 Test: blockdev write read invalid size ...passed 00:07:10.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:10.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:10.371 Test: blockdev write read max offset ...passed 00:07:10.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:10.371 Test: blockdev writev readv 8 blocks ...passed 00:07:10.371 Test: blockdev writev readv 30 x 1block ...passed 00:07:10.371 Test: blockdev writev readv block ...passed 00:07:10.371 Test: blockdev writev readv size > 128k ...passed 00:07:10.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:10.371 Test: blockdev comparev and writev ...passed 00:07:10.371 Test: blockdev nvme passthru rw ...[2024-09-30 19:50:54.561153] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:10.371 separate metadata which is not supported yet. 
00:07:10.371 passed 00:07:10.371 Test: blockdev nvme passthru vendor specific ...passed 00:07:10.371 Test: blockdev nvme admin passthru ...[2024-09-30 19:50:54.561527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:10.371 [2024-09-30 19:50:54.561566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:10.371 passed 00:07:10.371 Test: blockdev copy ...passed 00:07:10.371 00:07:10.371 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.371 suites 7 7 n/a 0 0 00:07:10.371 tests 161 161 161 0 0 00:07:10.371 asserts 1025 1025 1025 0 n/a 00:07:10.371 00:07:10.371 Elapsed time = 1.021 seconds 00:07:10.371 0 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61749 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61749 ']' 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61749 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61749 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61749' 00:07:10.371 killing process with pid 61749 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61749 00:07:10.371 19:50:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 
61749 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:11.749 00:07:11.749 real 0m2.563s 00:07:11.749 user 0m6.652s 00:07:11.749 sys 0m0.284s 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 ************************************ 00:07:11.749 END TEST bdev_bounds 00:07:11.749 ************************************ 00:07:11.749 19:50:55 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:11.749 19:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:11.749 19:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.749 19:50:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 ************************************ 00:07:11.749 START TEST bdev_nbd 00:07:11.749 ************************************ 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:11.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61809 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/blockdev.sh@319 -- # waitforlisten 61809 /var/tmp/spdk-nbd.sock 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61809 ']' 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.749 19:50:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:11.749 [2024-09-30 19:50:55.869970] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:11.749 [2024-09-30 19:50:55.870057] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.749 [2024-09-30 19:50:56.007692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.008 [2024-09-30 19:50:56.154076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:12.574 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@25 -- # local nbd_device 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:12.575 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.831 1+0 records in 00:07:12.831 1+0 records out 00:07:12.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455114 s, 9.0 MB/s 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.831 19:50:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.831 19:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.831 1+0 records in 00:07:12.831 1+0 records out 00:07:12.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287578 s, 14.2 MB/s 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.831 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:13.087 19:50:57 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.087 1+0 records in 00:07:13.087 1+0 records out 00:07:13.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324929 s, 12.6 MB/s 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.087 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.344 1+0 records in 00:07:13.344 1+0 records out 00:07:13.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462321 s, 8.9 MB/s 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.344 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd4 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.601 1+0 records in 00:07:13.601 1+0 records out 00:07:13.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029084 s, 14.1 MB/s 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.601 19:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.859 1+0 records in 00:07:13.859 1+0 records out 00:07:13.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522436 s, 7.8 MB/s 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.859 19:50:58 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:13.859 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.117 1+0 records in 
00:07:14.117 1+0 records out 00:07:14.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562327 s, 7.3 MB/s 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:14.117 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd0", 00:07:14.375 "bdev_name": "Nvme0n1" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd1", 00:07:14.375 "bdev_name": "Nvme1n1p1" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd2", 00:07:14.375 "bdev_name": "Nvme1n1p2" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd3", 00:07:14.375 "bdev_name": "Nvme2n1" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd4", 00:07:14.375 "bdev_name": "Nvme2n2" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd5", 00:07:14.375 "bdev_name": "Nvme2n3" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd6", 00:07:14.375 "bdev_name": "Nvme3n1" 00:07:14.375 } 00:07:14.375 ]' 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # 
nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd0", 00:07:14.375 "bdev_name": "Nvme0n1" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd1", 00:07:14.375 "bdev_name": "Nvme1n1p1" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd2", 00:07:14.375 "bdev_name": "Nvme1n1p2" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd3", 00:07:14.375 "bdev_name": "Nvme2n1" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd4", 00:07:14.375 "bdev_name": "Nvme2n2" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd5", 00:07:14.375 "bdev_name": "Nvme2n3" 00:07:14.375 }, 00:07:14.375 { 00:07:14.375 "nbd_device": "/dev/nbd6", 00:07:14.375 "bdev_name": "Nvme3n1" 00:07:14.375 } 00:07:14.375 ]' 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.375 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.633 19:50:58 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.633 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.891 19:50:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.891 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:15.148 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.405 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:15.662 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:15.662 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 
00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.663 19:50:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.920 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 
00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:16.178 /dev/nbd0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.178 1+0 records in 00:07:16.178 1+0 records out 00:07:16.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406451 s, 10.1 MB/s 00:07:16.178 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:16.436 /dev/nbd1 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 
00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.436 1+0 records in 00:07:16.436 1+0 records out 00:07:16.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366945 s, 11.2 MB/s 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.436 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:16.694 /dev/nbd10 00:07:16.694 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:16.694 19:51:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:16.694 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:16.694 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.694 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.694 
19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.694 19:51:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:16.694 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.694 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.694 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.694 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.694 1+0 records in 00:07:16.694 1+0 records out 00:07:16.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371078 s, 11.0 MB/s 00:07:16.694 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.694 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.695 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.695 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.695 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.695 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.695 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.695 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:16.953 /dev/nbd11 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:16.953 
19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.953 1+0 records in 00:07:16.953 1+0 records out 00:07:16.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309234 s, 13.2 MB/s 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.953 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:17.212 /dev/nbd12 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.212 1+0 records in 00:07:17.212 1+0 records out 00:07:17.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473923 s, 8.6 MB/s 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.212 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:17.470 /dev/nbd13 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.470 1+0 records in 00:07:17.470 1+0 records out 00:07:17.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387141 s, 10.6 MB/s 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.470 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:17.728 /dev/nbd14 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.728 1+0 records in 00:07:17.728 1+0 records out 00:07:17.728 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000395812 s, 10.3 MB/s 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.728 19:51:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd0", 00:07:17.987 "bdev_name": "Nvme0n1" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd1", 00:07:17.987 "bdev_name": "Nvme1n1p1" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd10", 00:07:17.987 "bdev_name": "Nvme1n1p2" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd11", 00:07:17.987 "bdev_name": "Nvme2n1" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd12", 00:07:17.987 "bdev_name": "Nvme2n2" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd13", 00:07:17.987 "bdev_name": "Nvme2n3" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 
"nbd_device": "/dev/nbd14", 00:07:17.987 "bdev_name": "Nvme3n1" 00:07:17.987 } 00:07:17.987 ]' 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd0", 00:07:17.987 "bdev_name": "Nvme0n1" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd1", 00:07:17.987 "bdev_name": "Nvme1n1p1" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd10", 00:07:17.987 "bdev_name": "Nvme1n1p2" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd11", 00:07:17.987 "bdev_name": "Nvme2n1" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd12", 00:07:17.987 "bdev_name": "Nvme2n2" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd13", 00:07:17.987 "bdev_name": "Nvme2n3" 00:07:17.987 }, 00:07:17.987 { 00:07:17.987 "nbd_device": "/dev/nbd14", 00:07:17.987 "bdev_name": "Nvme3n1" 00:07:17.987 } 00:07:17.987 ]' 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.987 /dev/nbd1 00:07:17.987 /dev/nbd10 00:07:17.987 /dev/nbd11 00:07:17.987 /dev/nbd12 00:07:17.987 /dev/nbd13 00:07:17.987 /dev/nbd14' 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.987 /dev/nbd1 00:07:17.987 /dev/nbd10 00:07:17.987 /dev/nbd11 00:07:17.987 /dev/nbd12 00:07:17.987 /dev/nbd13 00:07:17.987 /dev/nbd14' 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 
00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:17.987 256+0 records in 00:07:17.987 256+0 records out 00:07:17.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628925 s, 167 MB/s 00:07:17.987 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.988 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.988 256+0 records in 00:07:17.988 256+0 records out 00:07:17.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0619418 s, 16.9 MB/s 00:07:17.988 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.988 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.988 256+0 records in 00:07:17.988 256+0 records out 00:07:17.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643707 s, 16.3 MB/s 00:07:17.988 19:51:02 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.988 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:18.246 256+0 records in 00:07:18.246 256+0 records out 00:07:18.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0609371 s, 17.2 MB/s 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:18.246 256+0 records in 00:07:18.246 256+0 records out 00:07:18.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0596897 s, 17.6 MB/s 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:18.246 256+0 records in 00:07:18.246 256+0 records out 00:07:18.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0604153 s, 17.4 MB/s 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:18.246 256+0 records in 00:07:18.246 256+0 records out 00:07:18.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0595979 s, 17.6 MB/s 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.246 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:18.504 256+0 records in 00:07:18.504 
256+0 records out 00:07:18.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0599208 s, 17.5 MB/s 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.504 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.505 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.763 19:51:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.763 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.021 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.280 19:51:03 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.566 19:51:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.824 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.083 19:51:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:20.083 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:20.340 malloc_lvol_verify 00:07:20.341 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:20.599 aa58bd20-dddd-44c3-868b-13e12fdb8a8e 00:07:20.599 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:20.858 c436b7c5-6ea4-42cb-b6f4-d6fbf0855c6d 00:07:20.858 19:51:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:20.858 /dev/nbd0 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:20.858 
19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:20.858 mke2fs 1.47.0 (5-Feb-2023) 00:07:20.858 Discarding device blocks: 0/4096 done 00:07:20.858 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:20.858 00:07:20.858 Allocating group tables: 0/1 done 00:07:20.858 Writing inode tables: 0/1 done 00:07:20.858 Creating journal (1024 blocks): done 00:07:20.858 Writing superblocks and filesystem accounting information: 0/1 done 00:07:20.858 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.858 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61809 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61809 ']' 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61809 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61809 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.115 killing process with pid 61809 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61809' 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61809 00:07:21.115 19:51:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61809 00:07:22.049 19:51:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:22.049 00:07:22.049 real 0m10.326s 00:07:22.049 user 0m14.931s 00:07:22.049 sys 0m3.366s 00:07:22.049 19:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.049 19:51:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.049 ************************************ 00:07:22.049 END TEST bdev_nbd 00:07:22.049 
************************************ 00:07:22.049 19:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:22.049 19:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:22.049 19:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:22.049 skipping fio tests on NVMe due to multi-ns failures. 00:07:22.049 19:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:22.049 19:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:22.049 19:51:06 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.049 19:51:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:22.049 19:51:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.049 19:51:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.049 ************************************ 00:07:22.049 START TEST bdev_verify 00:07:22.049 ************************************ 00:07:22.049 19:51:06 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.049 [2024-09-30 19:51:06.243955] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:22.049 [2024-09-30 19:51:06.244071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62216 ] 00:07:22.049 [2024-09-30 19:51:06.394392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.307 [2024-09-30 19:51:06.538208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.307 [2024-09-30 19:51:06.538300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.872 Running I/O for 5 seconds... 00:07:27.983 24256.00 IOPS, 94.75 MiB/s 22624.00 IOPS, 88.38 MiB/s 22869.33 IOPS, 89.33 MiB/s 22448.00 IOPS, 87.69 MiB/s 22361.60 IOPS, 87.35 MiB/s 00:07:27.983 Latency(us) 00:07:27.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.983 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.983 Verification LBA range: start 0x0 length 0xbd0bd 00:07:27.983 Nvme0n1 : 5.05 1547.01 6.04 0.00 0.00 82521.15 15627.82 73803.62 00:07:27.983 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.983 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:27.983 Nvme0n1 : 5.04 1599.44 6.25 0.00 0.00 79728.13 13208.02 71787.13 00:07:27.983 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x0 length 0x4ff80 00:07:27.984 Nvme1n1p1 : 5.05 1546.55 6.04 0.00 0.00 82440.14 16333.59 70980.53 00:07:27.984 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:27.984 Nvme1n1p1 : 5.07 1601.79 6.26 0.00 0.00 79335.24 15627.82 70577.23 00:07:27.984 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x0 length 
0x4ff7f 00:07:27.984 Nvme1n1p2 : 5.05 1546.09 6.04 0.00 0.00 82370.69 17442.66 68560.74 00:07:27.984 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:27.984 Nvme1n1p2 : 5.08 1600.40 6.25 0.00 0.00 79209.30 18551.73 69367.34 00:07:27.984 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x0 length 0x80000 00:07:27.984 Nvme2n1 : 5.05 1545.67 6.04 0.00 0.00 82265.70 19156.68 68157.44 00:07:27.984 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x80000 length 0x80000 00:07:27.984 Nvme2n1 : 5.09 1608.18 6.28 0.00 0.00 78930.46 10687.41 67350.84 00:07:27.984 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x0 length 0x80000 00:07:27.984 Nvme2n2 : 5.05 1545.25 6.04 0.00 0.00 82160.43 18551.73 69770.63 00:07:27.984 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x80000 length 0x80000 00:07:27.984 Nvme2n2 : 5.10 1607.76 6.28 0.00 0.00 78785.38 11090.71 66140.95 00:07:27.984 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x0 length 0x80000 00:07:27.984 Nvme2n3 : 5.07 1553.16 6.07 0.00 0.00 81610.39 6805.66 71383.83 00:07:27.984 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x80000 length 0x80000 00:07:27.984 Nvme2n3 : 5.10 1607.34 6.28 0.00 0.00 78722.56 11040.30 68560.74 00:07:27.984 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:27.984 Verification LBA range: start 0x0 length 0x20000 00:07:27.984 Nvme3n1 : 5.08 1562.63 6.10 0.00 0.00 81104.81 6452.78 74206.92 00:07:27.984 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 4096) 00:07:27.984 Verification LBA range: start 0x20000 length 0x20000 00:07:27.984 Nvme3n1 : 5.10 1606.92 6.28 0.00 0.00 78673.03 9376.69 72593.72 00:07:27.984 =================================================================================================================== 00:07:27.984 Total : 22078.20 86.24 0.00 0.00 80528.72 6452.78 74206.92 00:07:29.360 00:07:29.360 real 0m7.435s 00:07:29.360 user 0m13.806s 00:07:29.360 sys 0m0.223s 00:07:29.360 19:51:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.360 19:51:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.360 ************************************ 00:07:29.360 END TEST bdev_verify 00:07:29.360 ************************************ 00:07:29.360 19:51:13 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:29.360 19:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:29.360 19:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.360 19:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.360 ************************************ 00:07:29.360 START TEST bdev_verify_big_io 00:07:29.360 ************************************ 00:07:29.360 19:51:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:29.617 [2024-09-30 19:51:13.734633] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:29.618 [2024-09-30 19:51:13.734723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62314 ] 00:07:29.618 [2024-09-30 19:51:13.877453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.876 [2024-09-30 19:51:14.058812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.876 [2024-09-30 19:51:14.058888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.442 Running I/O for 5 seconds... 00:07:36.788 1998.00 IOPS, 124.88 MiB/s 3227.00 IOPS, 201.69 MiB/s 00:07:36.788 Latency(us) 00:07:36.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.788 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0xbd0b 00:07:36.788 Nvme0n1 : 5.92 98.05 6.13 0.00 0.00 1227964.33 11645.24 1464780.01 00:07:36.788 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:36.788 Nvme0n1 : 5.99 80.15 5.01 0.00 0.00 1522589.70 26214.40 2090699.22 00:07:36.788 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0x4ff8 00:07:36.788 Nvme1n1p1 : 6.01 103.78 6.49 0.00 0.00 1124297.87 112116.97 1426063.36 00:07:36.788 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:36.788 Nvme1n1p1 : 5.91 104.05 6.50 0.00 0.00 1139202.74 81062.99 1135688.47 00:07:36.788 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0x4ff7 00:07:36.788 Nvme1n1p2 : 6.01 103.58 6.47 0.00 0.00 1102421.20 
94775.14 1974549.27 00:07:36.788 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:36.788 Nvme1n1p2 : 5.91 108.25 6.77 0.00 0.00 1075395.66 102841.11 1096971.82 00:07:36.788 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0x8000 00:07:36.788 Nvme2n1 : 6.08 113.32 7.08 0.00 0.00 970068.01 63317.86 1490591.11 00:07:36.788 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x8000 length 0x8000 00:07:36.788 Nvme2n1 : 5.91 108.21 6.76 0.00 0.00 1038320.88 103244.41 1135688.47 00:07:36.788 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0x8000 00:07:36.788 Nvme2n2 : 6.15 111.93 7.00 0.00 0.00 942988.01 63721.16 2039077.02 00:07:36.788 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x8000 length 0x8000 00:07:36.788 Nvme2n2 : 6.05 115.92 7.24 0.00 0.00 940903.65 55655.19 1206669.00 00:07:36.788 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0x8000 00:07:36.788 Nvme2n3 : 6.19 126.32 7.90 0.00 0.00 806267.76 16232.76 2064888.12 00:07:36.788 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x8000 length 0x8000 00:07:36.788 Nvme2n3 : 6.15 125.38 7.84 0.00 0.00 840852.71 34683.67 1213121.77 00:07:36.788 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: start 0x0 length 0x2000 00:07:36.788 Nvme3n1 : 6.33 220.37 13.77 0.00 0.00 454900.69 322.95 1871304.86 00:07:36.788 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:36.788 Verification LBA range: 
start 0x2000 length 0x2000 00:07:36.788 Nvme3n1 : 6.15 141.19 8.82 0.00 0.00 727126.93 1127.98 1226027.32 00:07:36.788 =================================================================================================================== 00:07:36.788 Total : 1660.50 103.78 0.00 0.00 931596.40 322.95 2090699.22 00:07:38.688 00:07:38.688 real 0m8.907s 00:07:38.688 user 0m16.795s 00:07:38.688 sys 0m0.232s 00:07:38.688 19:51:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.688 19:51:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:38.688 ************************************ 00:07:38.688 END TEST bdev_verify_big_io 00:07:38.688 ************************************ 00:07:38.688 19:51:22 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.688 19:51:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:38.688 19:51:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.688 19:51:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.688 ************************************ 00:07:38.688 START TEST bdev_write_zeroes 00:07:38.688 ************************************ 00:07:38.688 19:51:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.688 [2024-09-30 19:51:22.704348] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:38.688 [2024-09-30 19:51:22.704440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62429 ] 00:07:38.688 [2024-09-30 19:51:22.847254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.688 [2024-09-30 19:51:22.992063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.262 Running I/O for 1 seconds... 00:07:40.204 70336.00 IOPS, 274.75 MiB/s 00:07:40.204 Latency(us) 00:07:40.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.204 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme0n1 : 1.02 10004.96 39.08 0.00 0.00 12766.05 10939.47 25004.50 00:07:40.204 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme1n1p1 : 1.02 9992.55 39.03 0.00 0.00 12763.88 10737.82 24399.56 00:07:40.204 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme1n1p2 : 1.03 9980.22 38.99 0.00 0.00 12752.15 10637.00 23794.61 00:07:40.204 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme2n1 : 1.03 9968.90 38.94 0.00 0.00 12721.85 10989.88 22584.71 00:07:40.204 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme2n2 : 1.03 9957.70 38.90 0.00 0.00 12695.40 10939.47 21979.77 00:07:40.204 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme2n3 : 1.03 9946.42 38.85 0.00 0.00 12670.58 10737.82 22584.71 00:07:40.204 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.204 Nvme3n1 : 1.03 9935.25 38.81 0.00 0.00 12659.00 9779.99 24097.08 00:07:40.204 
=================================================================================================================== 00:07:40.204 Total : 69786.00 272.60 0.00 0.00 12718.42 9779.99 25004.50 00:07:41.139 00:07:41.139 real 0m2.744s 00:07:41.139 user 0m2.466s 00:07:41.139 sys 0m0.163s 00:07:41.139 19:51:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.139 19:51:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:41.139 ************************************ 00:07:41.139 END TEST bdev_write_zeroes 00:07:41.139 ************************************ 00:07:41.139 19:51:25 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.139 19:51:25 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:41.139 19:51:25 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.139 19:51:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.139 ************************************ 00:07:41.139 START TEST bdev_json_nonenclosed 00:07:41.139 ************************************ 00:07:41.139 19:51:25 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.139 [2024-09-30 19:51:25.495633] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:41.139 [2024-09-30 19:51:25.495749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62482 ] 00:07:41.397 [2024-09-30 19:51:25.647058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.655 [2024-09-30 19:51:25.823563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.655 [2024-09-30 19:51:25.823640] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:41.655 [2024-09-30 19:51:25.823656] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.655 [2024-09-30 19:51:25.823665] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.914 00:07:41.914 real 0m0.674s 00:07:41.914 user 0m0.470s 00:07:41.914 sys 0m0.099s 00:07:41.914 19:51:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.914 19:51:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:41.914 ************************************ 00:07:41.914 END TEST bdev_json_nonenclosed 00:07:41.914 ************************************ 00:07:41.914 19:51:26 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.914 19:51:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:41.914 19:51:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.914 19:51:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.914 ************************************ 00:07:41.914 START TEST bdev_json_nonarray 00:07:41.914 
************************************ 00:07:41.914 19:51:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.914 [2024-09-30 19:51:26.207462] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:41.914 [2024-09-30 19:51:26.207577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62512 ] 00:07:42.173 [2024-09-30 19:51:26.356930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.173 [2024-09-30 19:51:26.532507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.173 [2024-09-30 19:51:26.532589] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:42.173 [2024-09-30 19:51:26.532606] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:42.173 [2024-09-30 19:51:26.532616] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.740 00:07:42.740 real 0m0.669s 00:07:42.740 user 0m0.464s 00:07:42.740 sys 0m0.100s 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:42.740 ************************************ 00:07:42.740 END TEST bdev_json_nonarray 00:07:42.740 ************************************ 00:07:42.740 19:51:26 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:42.740 19:51:26 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:42.740 19:51:26 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:42.740 19:51:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.740 19:51:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.740 19:51:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.740 ************************************ 00:07:42.740 START TEST bdev_gpt_uuid 00:07:42.740 ************************************ 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62533 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- 
# waitforlisten 62533 00:07:42.740 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62533 ']' 00:07:42.741 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.741 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.741 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.741 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.741 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:42.741 19:51:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.741 [2024-09-30 19:51:26.935076] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:42.741 [2024-09-30 19:51:26.935201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62533 ] 00:07:42.741 [2024-09-30 19:51:27.080118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.999 [2024-09-30 19:51:27.259079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.564 19:51:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.564 19:51:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:43.564 19:51:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.564 19:51:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.564 19:51:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.822 Some configs were skipped because the RPC state that can call them passed over. 
00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.822 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:44.080 { 00:07:44.080 "name": "Nvme1n1p1", 00:07:44.080 "aliases": [ 00:07:44.080 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:44.080 ], 00:07:44.080 "product_name": "GPT Disk", 00:07:44.080 "block_size": 4096, 00:07:44.080 "num_blocks": 655104, 00:07:44.080 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:44.080 "assigned_rate_limits": { 00:07:44.080 "rw_ios_per_sec": 0, 00:07:44.080 "rw_mbytes_per_sec": 0, 00:07:44.080 "r_mbytes_per_sec": 0, 00:07:44.080 "w_mbytes_per_sec": 0 00:07:44.080 }, 00:07:44.080 "claimed": false, 00:07:44.080 "zoned": false, 00:07:44.080 "supported_io_types": { 00:07:44.080 "read": true, 00:07:44.080 "write": true, 00:07:44.080 "unmap": true, 00:07:44.080 "flush": true, 00:07:44.080 "reset": true, 00:07:44.080 "nvme_admin": false, 00:07:44.080 "nvme_io": false, 00:07:44.080 "nvme_io_md": false, 00:07:44.080 "write_zeroes": true, 00:07:44.080 "zcopy": false, 00:07:44.080 
"get_zone_info": false, 00:07:44.080 "zone_management": false, 00:07:44.080 "zone_append": false, 00:07:44.080 "compare": true, 00:07:44.080 "compare_and_write": false, 00:07:44.080 "abort": true, 00:07:44.080 "seek_hole": false, 00:07:44.080 "seek_data": false, 00:07:44.080 "copy": true, 00:07:44.080 "nvme_iov_md": false 00:07:44.080 }, 00:07:44.080 "driver_specific": { 00:07:44.080 "gpt": { 00:07:44.080 "base_bdev": "Nvme1n1", 00:07:44.080 "offset_blocks": 256, 00:07:44.080 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:44.080 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:44.080 "partition_name": "SPDK_TEST_first" 00:07:44.080 } 00:07:44.080 } 00:07:44.080 } 00:07:44.080 ]' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:44.080 { 00:07:44.080 "name": "Nvme1n1p2", 00:07:44.080 "aliases": [ 00:07:44.080 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:44.080 ], 00:07:44.080 "product_name": "GPT Disk", 00:07:44.080 "block_size": 4096, 00:07:44.080 "num_blocks": 655103, 00:07:44.080 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:44.080 "assigned_rate_limits": { 00:07:44.080 "rw_ios_per_sec": 0, 00:07:44.080 "rw_mbytes_per_sec": 0, 00:07:44.080 "r_mbytes_per_sec": 0, 00:07:44.080 "w_mbytes_per_sec": 0 00:07:44.080 }, 00:07:44.080 "claimed": false, 00:07:44.080 "zoned": false, 00:07:44.080 "supported_io_types": { 00:07:44.080 "read": true, 00:07:44.080 "write": true, 00:07:44.080 "unmap": true, 00:07:44.080 "flush": true, 00:07:44.080 "reset": true, 00:07:44.080 "nvme_admin": false, 00:07:44.080 "nvme_io": false, 00:07:44.080 "nvme_io_md": false, 00:07:44.080 "write_zeroes": true, 00:07:44.080 "zcopy": false, 00:07:44.080 "get_zone_info": false, 00:07:44.080 "zone_management": false, 00:07:44.080 "zone_append": false, 00:07:44.080 "compare": true, 00:07:44.080 "compare_and_write": false, 00:07:44.080 "abort": true, 00:07:44.080 "seek_hole": false, 00:07:44.080 "seek_data": false, 00:07:44.080 "copy": true, 00:07:44.080 "nvme_iov_md": false 00:07:44.080 }, 00:07:44.080 "driver_specific": { 00:07:44.080 "gpt": { 00:07:44.080 "base_bdev": "Nvme1n1", 00:07:44.080 "offset_blocks": 655360, 00:07:44.080 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:44.080 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:44.080 "partition_name": "SPDK_TEST_second" 00:07:44.080 } 00:07:44.080 } 00:07:44.080 } 00:07:44.080 ]' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 
00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62533 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62533 ']' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62533 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62533 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.080 killing process with pid 62533 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62533' 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62533 00:07:44.080 19:51:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62533 00:07:45.978 00:07:45.978 real 0m3.102s 00:07:45.978 user 0m3.219s 00:07:45.978 sys 
0m0.370s 00:07:45.978 19:51:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.978 19:51:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 ************************************ 00:07:45.978 END TEST bdev_gpt_uuid 00:07:45.978 ************************************ 00:07:45.978 19:51:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:45.978 19:51:29 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:45.978 19:51:29 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:45.978 19:51:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.978 19:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.978 19:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:45.978 19:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:45.978 19:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:45.978 19:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:45.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.237 Waiting for block devices as requested 00:07:46.237 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.237 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.237 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.496 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:51.758 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:51.758 19:51:35 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:51.758 19:51:35 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:51.758 /dev/nvme0n1: 8 bytes were erased 
at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:51.758 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:51.758 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:51.758 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:51.758 19:51:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:51.758 00:07:51.758 real 0m56.029s 00:07:51.758 user 1m12.601s 00:07:51.758 sys 0m7.364s 00:07:51.758 19:51:36 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.758 19:51:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.758 ************************************ 00:07:51.758 END TEST blockdev_nvme_gpt 00:07:51.758 ************************************ 00:07:51.758 19:51:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:51.758 19:51:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.758 19:51:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.758 19:51:36 -- common/autotest_common.sh@10 -- # set +x 00:07:51.758 ************************************ 00:07:51.758 START TEST nvme 00:07:51.758 ************************************ 00:07:51.758 19:51:36 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:52.017 * Looking for test storage... 
00:07:52.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.017 19:51:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.017 19:51:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.017 19:51:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.017 19:51:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.017 19:51:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.017 19:51:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:52.017 19:51:36 nvme -- scripts/common.sh@345 -- # : 1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.017 19:51:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.017 19:51:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@353 -- # local d=1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.017 19:51:36 nvme -- scripts/common.sh@355 -- # echo 1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.017 19:51:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@353 -- # local d=2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.017 19:51:36 nvme -- scripts/common.sh@355 -- # echo 2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.017 19:51:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.017 19:51:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.017 19:51:36 nvme -- scripts/common.sh@368 -- # return 0 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:52.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.017 --rc genhtml_branch_coverage=1 00:07:52.017 --rc genhtml_function_coverage=1 00:07:52.017 --rc genhtml_legend=1 00:07:52.017 --rc geninfo_all_blocks=1 00:07:52.017 --rc geninfo_unexecuted_blocks=1 00:07:52.017 00:07:52.017 ' 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:52.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.017 --rc genhtml_branch_coverage=1 00:07:52.017 --rc genhtml_function_coverage=1 00:07:52.017 --rc genhtml_legend=1 00:07:52.017 --rc geninfo_all_blocks=1 00:07:52.017 --rc geninfo_unexecuted_blocks=1 00:07:52.017 00:07:52.017 ' 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:52.017 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:52.017 --rc genhtml_branch_coverage=1 00:07:52.017 --rc genhtml_function_coverage=1 00:07:52.017 --rc genhtml_legend=1 00:07:52.017 --rc geninfo_all_blocks=1 00:07:52.017 --rc geninfo_unexecuted_blocks=1 00:07:52.017 00:07:52.017 ' 00:07:52.017 19:51:36 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:52.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.017 --rc genhtml_branch_coverage=1 00:07:52.017 --rc genhtml_function_coverage=1 00:07:52.017 --rc genhtml_legend=1 00:07:52.017 --rc geninfo_all_blocks=1 00:07:52.017 --rc geninfo_unexecuted_blocks=1 00:07:52.017 00:07:52.017 ' 00:07:52.017 19:51:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:52.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:52.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.848 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.848 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.848 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.848 19:51:37 nvme -- nvme/nvme.sh@79 -- # uname 00:07:53.105 19:51:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:53.105 19:51:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:53.105 19:51:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1071 -- # stubpid=63169 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:53.106 Waiting for stub to ready for secondary processes... 
00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63169 ]] 00:07:53.106 19:51:37 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:53.106 [2024-09-30 19:51:37.246335] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:53.106 [2024-09-30 19:51:37.246432] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:53.671 [2024-09-30 19:51:37.944063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.930 [2024-09-30 19:51:38.113195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.930 [2024-09-30 19:51:38.113293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.930 [2024-09-30 19:51:38.113334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.930 [2024-09-30 19:51:38.127051] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:53.930 [2024-09-30 19:51:38.127086] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.930 [2024-09-30 19:51:38.140338] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:53.930 [2024-09-30 19:51:38.140420] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:53.930 [2024-09-30 19:51:38.141865] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.930 [2024-09-30 19:51:38.142001] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse 
session for device spdk/nvme1 created 00:07:53.930 [2024-09-30 19:51:38.142045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:53.930 [2024-09-30 19:51:38.143843] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.930 [2024-09-30 19:51:38.143998] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:53.930 [2024-09-30 19:51:38.144045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:53.930 [2024-09-30 19:51:38.145566] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:53.930 [2024-09-30 19:51:38.145682] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:53.930 [2024-09-30 19:51:38.145726] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:53.930 [2024-09-30 19:51:38.145761] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:53.930 [2024-09-30 19:51:38.145794] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:53.930 done. 00:07:53.930 19:51:38 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:53.930 19:51:38 nvme -- common/autotest_common.sh@1078 -- # echo done. 
00:07:53.930 19:51:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:53.930 19:51:38 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:53.930 19:51:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.930 19:51:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.930 ************************************ 00:07:53.930 START TEST nvme_reset 00:07:53.930 ************************************ 00:07:53.930 19:51:38 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:54.189 Initializing NVMe Controllers 00:07:54.189 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:54.189 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:54.189 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:54.189 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:54.189 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:54.189 ************************************ 00:07:54.189 END TEST nvme_reset 00:07:54.189 ************************************ 00:07:54.189 00:07:54.189 real 0m0.212s 00:07:54.189 user 0m0.056s 00:07:54.189 sys 0m0.104s 00:07:54.189 19:51:38 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.189 19:51:38 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:54.189 19:51:38 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:54.189 19:51:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.189 19:51:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.189 19:51:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.189 ************************************ 00:07:54.189 START TEST nvme_identify 00:07:54.189 ************************************ 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:54.189 
19:51:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:54.189 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:54.189 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:54.189 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:54.189 19:51:38 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:54.189 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:54.450 [2024-09-30 19:51:38.717157] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63190 terminated unexpected 00:07:54.450 ===================================================== 00:07:54.450 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.450 ===================================================== 00:07:54.450 Controller Capabilities/Features 00:07:54.450 ================================ 00:07:54.450 Vendor ID: 1b36 00:07:54.450 Subsystem Vendor ID: 1af4 00:07:54.450 Serial Number: 12340 00:07:54.450 Model Number: QEMU NVMe Ctrl 00:07:54.450 Firmware Version: 8.0.0 00:07:54.450 Recommended Arb Burst: 6 00:07:54.450 IEEE OUI Identifier: 00 54 52 00:07:54.450 Multi-path I/O 00:07:54.450 May have 
multiple subsystem ports: No 00:07:54.450 May have multiple controllers: No 00:07:54.450 Associated with SR-IOV VF: No 00:07:54.450 Max Data Transfer Size: 524288 00:07:54.450 Max Number of Namespaces: 256 00:07:54.450 Max Number of I/O Queues: 64 00:07:54.450 NVMe Specification Version (VS): 1.4 00:07:54.450 NVMe Specification Version (Identify): 1.4 00:07:54.450 Maximum Queue Entries: 2048 00:07:54.450 Contiguous Queues Required: Yes 00:07:54.450 Arbitration Mechanisms Supported 00:07:54.450 Weighted Round Robin: Not Supported 00:07:54.450 Vendor Specific: Not Supported 00:07:54.450 Reset Timeout: 7500 ms 00:07:54.450 Doorbell Stride: 4 bytes 00:07:54.450 NVM Subsystem Reset: Not Supported 00:07:54.450 Command Sets Supported 00:07:54.450 NVM Command Set: Supported 00:07:54.450 Boot Partition: Not Supported 00:07:54.450 Memory Page Size Minimum: 4096 bytes 00:07:54.450 Memory Page Size Maximum: 65536 bytes 00:07:54.450 Persistent Memory Region: Not Supported 00:07:54.450 Optional Asynchronous Events Supported 00:07:54.450 Namespace Attribute Notices: Supported 00:07:54.450 Firmware Activation Notices: Not Supported 00:07:54.450 ANA Change Notices: Not Supported 00:07:54.450 PLE Aggregate Log Change Notices: Not Supported 00:07:54.450 LBA Status Info Alert Notices: Not Supported 00:07:54.450 EGE Aggregate Log Change Notices: Not Supported 00:07:54.450 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.450 Zone Descriptor Change Notices: Not Supported 00:07:54.450 Discovery Log Change Notices: Not Supported 00:07:54.450 Controller Attributes 00:07:54.450 128-bit Host Identifier: Not Supported 00:07:54.450 Non-Operational Permissive Mode: Not Supported 00:07:54.450 NVM Sets: Not Supported 00:07:54.451 Read Recovery Levels: Not Supported 00:07:54.451 Endurance Groups: Not Supported 00:07:54.451 Predictable Latency Mode: Not Supported 00:07:54.451 Traffic Based Keep ALive: Not Supported 00:07:54.451 Namespace Granularity: Not Supported 00:07:54.451 SQ 
Associations: Not Supported 00:07:54.451 UUID List: Not Supported 00:07:54.451 Multi-Domain Subsystem: Not Supported 00:07:54.451 Fixed Capacity Management: Not Supported 00:07:54.451 Variable Capacity Management: Not Supported 00:07:54.451 Delete Endurance Group: Not Supported 00:07:54.451 Delete NVM Set: Not Supported 00:07:54.451 Extended LBA Formats Supported: Supported 00:07:54.451 Flexible Data Placement Supported: Not Supported 00:07:54.451 00:07:54.451 Controller Memory Buffer Support 00:07:54.451 ================================ 00:07:54.451 Supported: No 00:07:54.451 00:07:54.451 Persistent Memory Region Support 00:07:54.451 ================================ 00:07:54.451 Supported: No 00:07:54.451 00:07:54.451 Admin Command Set Attributes 00:07:54.451 ============================ 00:07:54.451 Security Send/Receive: Not Supported 00:07:54.451 Format NVM: Supported 00:07:54.451 Firmware Activate/Download: Not Supported 00:07:54.451 Namespace Management: Supported 00:07:54.451 Device Self-Test: Not Supported 00:07:54.451 Directives: Supported 00:07:54.451 NVMe-MI: Not Supported 00:07:54.451 Virtualization Management: Not Supported 00:07:54.451 Doorbell Buffer Config: Supported 00:07:54.451 Get LBA Status Capability: Not Supported 00:07:54.451 Command & Feature Lockdown Capability: Not Supported 00:07:54.451 Abort Command Limit: 4 00:07:54.451 Async Event Request Limit: 4 00:07:54.451 Number of Firmware Slots: N/A 00:07:54.451 Firmware Slot 1 Read-Only: N/A 00:07:54.451 Firmware Activation Without Reset: N/A 00:07:54.451 Multiple Update Detection Support: N/A 00:07:54.451 Firmware Update Granularity: No Information Provided 00:07:54.451 Per-Namespace SMART Log: Yes 00:07:54.451 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.451 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:54.451 Command Effects Log Page: Supported 00:07:54.451 Get Log Page Extended Data: Supported 00:07:54.451 Telemetry Log Pages: Not Supported 00:07:54.451 Persistent Event 
Log Pages: Not Supported 00:07:54.451 Supported Log Pages Log Page: May Support 00:07:54.451 Commands Supported & Effects Log Page: Not Supported 00:07:54.451 Feature Identifiers & Effects Log Page:May Support 00:07:54.451 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.451 Data Area 4 for Telemetry Log: Not Supported 00:07:54.451 Error Log Page Entries Supported: 1 00:07:54.451 Keep Alive: Not Supported 00:07:54.451 00:07:54.451 NVM Command Set Attributes 00:07:54.451 ========================== 00:07:54.451 Submission Queue Entry Size 00:07:54.451 Max: 64 00:07:54.451 Min: 64 00:07:54.451 Completion Queue Entry Size 00:07:54.451 Max: 16 00:07:54.451 Min: 16 00:07:54.451 Number of Namespaces: 256 00:07:54.451 Compare Command: Supported 00:07:54.451 Write Uncorrectable Command: Not Supported 00:07:54.451 Dataset Management Command: Supported 00:07:54.451 Write Zeroes Command: Supported 00:07:54.451 Set Features Save Field: Supported 00:07:54.451 Reservations: Not Supported 00:07:54.451 Timestamp: Supported 00:07:54.451 Copy: Supported 00:07:54.451 Volatile Write Cache: Present 00:07:54.451 Atomic Write Unit (Normal): 1 00:07:54.451 Atomic Write Unit (PFail): 1 00:07:54.451 Atomic Compare & Write Unit: 1 00:07:54.451 Fused Compare & Write: Not Supported 00:07:54.451 Scatter-Gather List 00:07:54.451 SGL Command Set: Supported 00:07:54.451 SGL Keyed: Not Supported 00:07:54.451 SGL Bit Bucket Descriptor: Not Supported 00:07:54.451 SGL Metadata Pointer: Not Supported 00:07:54.451 Oversized SGL: Not Supported 00:07:54.451 SGL Metadata Address: Not Supported 00:07:54.451 SGL Offset: Not Supported 00:07:54.451 Transport SGL Data Block: Not Supported 00:07:54.451 Replay Protected Memory Block: Not Supported 00:07:54.451 00:07:54.451 Firmware Slot Information 00:07:54.451 ========================= 00:07:54.451 Active slot: 1 00:07:54.451 Slot 1 Firmware Revision: 1.0 00:07:54.451 00:07:54.451 00:07:54.451 Commands Supported and Effects 00:07:54.451 
============================== 00:07:54.451 Admin Commands 00:07:54.451 -------------- 00:07:54.451 Delete I/O Submission Queue (00h): Supported 00:07:54.451 Create I/O Submission Queue (01h): Supported 00:07:54.451 Get Log Page (02h): Supported 00:07:54.451 Delete I/O Completion Queue (04h): Supported 00:07:54.451 Create I/O Completion Queue (05h): Supported 00:07:54.451 Identify (06h): Supported 00:07:54.451 Abort (08h): Supported 00:07:54.451 Set Features (09h): Supported 00:07:54.451 Get Features (0Ah): Supported 00:07:54.451 Asynchronous Event Request (0Ch): Supported 00:07:54.451 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.451 Directive Send (19h): Supported 00:07:54.451 Directive Receive (1Ah): Supported 00:07:54.451 Virtualization Management (1Ch): Supported 00:07:54.451 Doorbell Buffer Config (7Ch): Supported 00:07:54.451 Format NVM (80h): Supported LBA-Change 00:07:54.451 I/O Commands 00:07:54.451 ------------ 00:07:54.451 Flush (00h): Supported LBA-Change 00:07:54.451 Write (01h): Supported LBA-Change 00:07:54.451 Read (02h): Supported 00:07:54.451 Compare (05h): Supported 00:07:54.451 Write Zeroes (08h): Supported LBA-Change 00:07:54.451 Dataset Management (09h): Supported LBA-Change 00:07:54.451 Unknown (0Ch): Supported 00:07:54.451 Unknown (12h): Supported 00:07:54.451 Copy (19h): Supported LBA-Change 00:07:54.451 Unknown (1Dh): Supported LBA-Change 00:07:54.451 00:07:54.451 Error Log 00:07:54.451 ========= 00:07:54.451 00:07:54.451 Arbitration 00:07:54.451 =========== 00:07:54.451 Arbitration Burst: no limit 00:07:54.451 00:07:54.451 Power Management 00:07:54.451 ================ 00:07:54.451 Number of Power States: 1 00:07:54.451 Current Power State: Power State #0 00:07:54.451 Power State #0: 00:07:54.451 Max Power: 25.00 W 00:07:54.451 Non-Operational State: Operational 00:07:54.451 Entry Latency: 16 microseconds 00:07:54.451 Exit Latency: 4 microseconds 00:07:54.451 Relative Read Throughput: 0 00:07:54.451 Relative Read 
Latency: 0 00:07:54.452 Relative Write Throughput: 0 00:07:54.452 Relative Write Latency: 0 00:07:54.452 Idle Power[2024-09-30 19:51:38.718361] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63190 terminated unexpected 00:07:54.452 : Not Reported 00:07:54.452 Active Power: Not Reported 00:07:54.452 Non-Operational Permissive Mode: Not Supported 00:07:54.452 00:07:54.452 Health Information 00:07:54.452 ================== 00:07:54.452 Critical Warnings: 00:07:54.452 Available Spare Space: OK 00:07:54.452 Temperature: OK 00:07:54.452 Device Reliability: OK 00:07:54.452 Read Only: No 00:07:54.452 Volatile Memory Backup: OK 00:07:54.452 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.452 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.452 Available Spare: 0% 00:07:54.452 Available Spare Threshold: 0% 00:07:54.452 Life Percentage Used: 0% 00:07:54.452 Data Units Read: 679 00:07:54.452 Data Units Written: 607 00:07:54.452 Host Read Commands: 38082 00:07:54.452 Host Write Commands: 37868 00:07:54.452 Controller Busy Time: 0 minutes 00:07:54.452 Power Cycles: 0 00:07:54.452 Power On Hours: 0 hours 00:07:54.452 Unsafe Shutdowns: 0 00:07:54.452 Unrecoverable Media Errors: 0 00:07:54.452 Lifetime Error Log Entries: 0 00:07:54.452 Warning Temperature Time: 0 minutes 00:07:54.452 Critical Temperature Time: 0 minutes 00:07:54.452 00:07:54.452 Number of Queues 00:07:54.452 ================ 00:07:54.452 Number of I/O Submission Queues: 64 00:07:54.452 Number of I/O Completion Queues: 64 00:07:54.452 00:07:54.452 ZNS Specific Controller Data 00:07:54.452 ============================ 00:07:54.452 Zone Append Size Limit: 0 00:07:54.452 00:07:54.452 00:07:54.452 Active Namespaces 00:07:54.452 ================= 00:07:54.452 Namespace ID:1 00:07:54.452 Error Recovery Timeout: Unlimited 00:07:54.452 Command Set Identifier: NVM (00h) 00:07:54.452 Deallocate: Supported 00:07:54.452 Deallocated/Unwritten Error: Supported 00:07:54.452 
Deallocated Read Value: All 0x00 00:07:54.452 Deallocate in Write Zeroes: Not Supported 00:07:54.452 Deallocated Guard Field: 0xFFFF 00:07:54.452 Flush: Supported 00:07:54.452 Reservation: Not Supported 00:07:54.452 Metadata Transferred as: Separate Metadata Buffer 00:07:54.452 Namespace Sharing Capabilities: Private 00:07:54.452 Size (in LBAs): 1548666 (5GiB) 00:07:54.452 Capacity (in LBAs): 1548666 (5GiB) 00:07:54.452 Utilization (in LBAs): 1548666 (5GiB) 00:07:54.452 Thin Provisioning: Not Supported 00:07:54.452 Per-NS Atomic Units: No 00:07:54.452 Maximum Single Source Range Length: 128 00:07:54.452 Maximum Copy Length: 128 00:07:54.452 Maximum Source Range Count: 128 00:07:54.452 NGUID/EUI64 Never Reused: No 00:07:54.452 Namespace Write Protected: No 00:07:54.452 Number of LBA Formats: 8 00:07:54.452 Current LBA Format: LBA Format #07 00:07:54.452 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.452 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.452 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.452 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.452 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.452 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.452 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.452 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.452 00:07:54.452 NVM Specific Namespace Data 00:07:54.452 =========================== 00:07:54.452 Logical Block Storage Tag Mask: 0 00:07:54.452 Protection Information Capabilities: 00:07:54.452 16b Guard Protection Information Storage Tag Support: No 00:07:54.452 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.452 Storage Tag Check Read Support: No 00:07:54.452 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #02: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.452 ===================================================== 00:07:54.452 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.452 ===================================================== 00:07:54.452 Controller Capabilities/Features 00:07:54.452 ================================ 00:07:54.452 Vendor ID: 1b36 00:07:54.452 Subsystem Vendor ID: 1af4 00:07:54.452 Serial Number: 12341 00:07:54.452 Model Number: QEMU NVMe Ctrl 00:07:54.452 Firmware Version: 8.0.0 00:07:54.452 Recommended Arb Burst: 6 00:07:54.452 IEEE OUI Identifier: 00 54 52 00:07:54.452 Multi-path I/O 00:07:54.452 May have multiple subsystem ports: No 00:07:54.452 May have multiple controllers: No 00:07:54.452 Associated with SR-IOV VF: No 00:07:54.452 Max Data Transfer Size: 524288 00:07:54.452 Max Number of Namespaces: 256 00:07:54.452 Max Number of I/O Queues: 64 00:07:54.452 NVMe Specification Version (VS): 1.4 00:07:54.452 NVMe Specification Version (Identify): 1.4 00:07:54.452 Maximum Queue Entries: 2048 00:07:54.452 Contiguous Queues Required: Yes 00:07:54.452 Arbitration Mechanisms Supported 00:07:54.452 Weighted Round Robin: Not Supported 00:07:54.452 Vendor Specific: Not Supported 00:07:54.452 Reset Timeout: 7500 ms 00:07:54.452 Doorbell Stride: 4 bytes 00:07:54.452 NVM Subsystem Reset: Not Supported 00:07:54.452 Command Sets Supported 00:07:54.452 NVM Command Set: Supported 00:07:54.452 Boot Partition: Not 
Supported 00:07:54.452 Memory Page Size Minimum: 4096 bytes 00:07:54.452 Memory Page Size Maximum: 65536 bytes 00:07:54.452 Persistent Memory Region: Not Supported 00:07:54.452 Optional Asynchronous Events Supported 00:07:54.452 Namespace Attribute Notices: Supported 00:07:54.452 Firmware Activation Notices: Not Supported 00:07:54.452 ANA Change Notices: Not Supported 00:07:54.452 PLE Aggregate Log Change Notices: Not Supported 00:07:54.452 LBA Status Info Alert Notices: Not Supported 00:07:54.452 EGE Aggregate Log Change Notices: Not Supported 00:07:54.452 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.452 Zone Descriptor Change Notices: Not Supported 00:07:54.452 Discovery Log Change Notices: Not Supported 00:07:54.452 Controller Attributes 00:07:54.452 128-bit Host Identifier: Not Supported 00:07:54.452 Non-Operational Permissive Mode: Not Supported 00:07:54.452 NVM Sets: Not Supported 00:07:54.452 Read Recovery Levels: Not Supported 00:07:54.452 Endurance Groups: Not Supported 00:07:54.452 Predictable Latency Mode: Not Supported 00:07:54.452 Traffic Based Keep ALive: Not Supported 00:07:54.452 Namespace Granularity: Not Supported 00:07:54.452 SQ Associations: Not Supported 00:07:54.452 UUID List: Not Supported 00:07:54.452 Multi-Domain Subsystem: Not Supported 00:07:54.452 Fixed Capacity Management: Not Supported 00:07:54.452 Variable Capacity Management: Not Supported 00:07:54.452 Delete Endurance Group: Not Supported 00:07:54.452 Delete NVM Set: Not Supported 00:07:54.452 Extended LBA Formats Supported: Supported 00:07:54.452 Flexible Data Placement Supported: Not Supported 00:07:54.452 00:07:54.452 Controller Memory Buffer Support 00:07:54.452 ================================ 00:07:54.452 Supported: No 00:07:54.452 00:07:54.452 Persistent Memory Region Support 00:07:54.452 ================================ 00:07:54.452 Supported: No 00:07:54.452 00:07:54.452 Admin Command Set Attributes 00:07:54.452 ============================ 00:07:54.452 
Security Send/Receive: Not Supported 00:07:54.452 Format NVM: Supported 00:07:54.453 Firmware Activate/Download: Not Supported 00:07:54.453 Namespace Management: Supported 00:07:54.453 Device Self-Test: Not Supported 00:07:54.453 Directives: Supported 00:07:54.453 NVMe-MI: Not Supported 00:07:54.453 Virtualization Management: Not Supported 00:07:54.453 Doorbell Buffer Config: Supported 00:07:54.453 Get LBA Status Capability: Not Supported 00:07:54.453 Command & Feature Lockdown Capability: Not Supported 00:07:54.453 Abort Command Limit: 4 00:07:54.453 Async Event Request Limit: 4 00:07:54.453 Number of Firmware Slots: N/A 00:07:54.453 Firmware Slot 1 Read-Only: N/A 00:07:54.453 Firmware Activation Without Reset: N/A 00:07:54.453 Multiple Update Detection Support: N/A 00:07:54.453 Firmware Update Granularity: No Information Provided 00:07:54.453 Per-Namespace SMART Log: Yes 00:07:54.453 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.453 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:54.453 Command Effects Log Page: Supported 00:07:54.453 Get Log Page Extended Data: Supported 00:07:54.453 Telemetry Log Pages: Not Supported 00:07:54.453 Persistent Event Log Pages: Not Supported 00:07:54.453 Supported Log Pages Log Page: May Support 00:07:54.453 Commands Supported & Effects Log Page: Not Supported 00:07:54.453 Feature Identifiers & Effects Log Page:May Support 00:07:54.453 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.453 Data Area 4 for Telemetry Log: Not Supported 00:07:54.453 Error Log Page Entries Supported: 1 00:07:54.453 Keep Alive: Not Supported 00:07:54.453 00:07:54.453 NVM Command Set Attributes 00:07:54.453 ========================== 00:07:54.453 Submission Queue Entry Size 00:07:54.453 Max: 64 00:07:54.453 Min: 64 00:07:54.453 Completion Queue Entry Size 00:07:54.453 Max: 16 00:07:54.453 Min: 16 00:07:54.453 Number of Namespaces: 256 00:07:54.453 Compare Command: Supported 00:07:54.453 Write Uncorrectable Command: Not Supported 
00:07:54.453 Dataset Management Command: Supported 00:07:54.453 Write Zeroes Command: Supported 00:07:54.453 Set Features Save Field: Supported 00:07:54.453 Reservations: Not Supported 00:07:54.453 Timestamp: Supported 00:07:54.453 Copy: Supported 00:07:54.453 Volatile Write Cache: Present 00:07:54.453 Atomic Write Unit (Normal): 1 00:07:54.453 Atomic Write Unit (PFail): 1 00:07:54.453 Atomic Compare & Write Unit: 1 00:07:54.453 Fused Compare & Write: Not Supported 00:07:54.453 Scatter-Gather List 00:07:54.453 SGL Command Set: Supported 00:07:54.453 SGL Keyed: Not Supported 00:07:54.453 SGL Bit Bucket Descriptor: Not Supported 00:07:54.453 SGL Metadata Pointer: Not Supported 00:07:54.453 Oversized SGL: Not Supported 00:07:54.453 SGL Metadata Address: Not Supported 00:07:54.453 SGL Offset: Not Supported 00:07:54.453 Transport SGL Data Block: Not Supported 00:07:54.453 Replay Protected Memory Block: Not Supported 00:07:54.453 00:07:54.453 Firmware Slot Information 00:07:54.453 ========================= 00:07:54.453 Active slot: 1 00:07:54.453 Slot 1 Firmware Revision: 1.0 00:07:54.453 00:07:54.453 00:07:54.453 Commands Supported and Effects 00:07:54.453 ============================== 00:07:54.453 Admin Commands 00:07:54.453 -------------- 00:07:54.453 Delete I/O Submission Queue (00h): Supported 00:07:54.453 Create I/O Submission Queue (01h): Supported 00:07:54.453 Get Log Page (02h): Supported 00:07:54.453 Delete I/O Completion Queue (04h): Supported 00:07:54.453 Create I/O Completion Queue (05h): Supported 00:07:54.453 Identify (06h): Supported 00:07:54.453 Abort (08h): Supported 00:07:54.453 Set Features (09h): Supported 00:07:54.453 Get Features (0Ah): Supported 00:07:54.453 Asynchronous Event Request (0Ch): Supported 00:07:54.453 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.453 Directive Send (19h): Supported 00:07:54.453 Directive Receive (1Ah): Supported 00:07:54.453 Virtualization Management (1Ch): Supported 00:07:54.453 Doorbell Buffer 
Config (7Ch): Supported 00:07:54.453 Format NVM (80h): Supported LBA-Change 00:07:54.453 I/O Commands 00:07:54.453 ------------ 00:07:54.453 Flush (00h): Supported LBA-Change 00:07:54.453 Write (01h): Supported LBA-Change 00:07:54.453 Read (02h): Supported 00:07:54.453 Compare (05h): Supported 00:07:54.453 Write Zeroes (08h): Supported LBA-Change 00:07:54.453 Dataset Management (09h): Supported LBA-Change 00:07:54.453 Unknown (0Ch): Supported 00:07:54.453 Unknown (12h): Supported 00:07:54.453 Copy (19h): Supported LBA-Change 00:07:54.453 Unknown (1Dh): Supported LBA-Change 00:07:54.453 00:07:54.453 Error Log 00:07:54.453 ========= 00:07:54.453 00:07:54.453 Arbitration 00:07:54.453 =========== 00:07:54.453 Arbitration Burst: no limit 00:07:54.453 00:07:54.453 Power Management 00:07:54.453 ================ 00:07:54.453 Number of Power States: 1 00:07:54.453 Current Power State: Power State #0 00:07:54.453 Power State #0: 00:07:54.453 Max Power: 25.00 W 00:07:54.453 Non-Operational State: Operational 00:07:54.453 Entry Latency: 16 microseconds 00:07:54.453 Exit Latency: 4 microseconds 00:07:54.453 Relative Read Throughput: 0 00:07:54.453 Relative Read Latency: 0 00:07:54.453 Relative Write Throughput: 0 00:07:54.453 Relative Write Latency: 0 00:07:54.453 Idle Power: Not Reported 00:07:54.453 Active Power: Not Reported 00:07:54.453 Non-Operational Permissive Mode: Not Supported 00:07:54.453 00:07:54.453 Health Information 00:07:54.453 ================== 00:07:54.453 Critical Warnings: 00:07:54.453 Available Spare Space: OK 00:07:54.453 Temperature: [2024-09-30 19:51:38.719026] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63190 terminated unexpected 00:07:54.453 OK 00:07:54.453 Device Reliability: OK 00:07:54.453 Read Only: No 00:07:54.453 Volatile Memory Backup: OK 00:07:54.453 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.453 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.453 Available Spare: 0% 00:07:54.453 
Available Spare Threshold: 0% 00:07:54.453 Life Percentage Used: 0% 00:07:54.453 Data Units Read: 1071 00:07:54.453 Data Units Written: 935 00:07:54.453 Host Read Commands: 56829 00:07:54.453 Host Write Commands: 55554 00:07:54.453 Controller Busy Time: 0 minutes 00:07:54.453 Power Cycles: 0 00:07:54.453 Power On Hours: 0 hours 00:07:54.453 Unsafe Shutdowns: 0 00:07:54.453 Unrecoverable Media Errors: 0 00:07:54.453 Lifetime Error Log Entries: 0 00:07:54.453 Warning Temperature Time: 0 minutes 00:07:54.453 Critical Temperature Time: 0 minutes 00:07:54.453 00:07:54.453 Number of Queues 00:07:54.453 ================ 00:07:54.453 Number of I/O Submission Queues: 64 00:07:54.453 Number of I/O Completion Queues: 64 00:07:54.453 00:07:54.453 ZNS Specific Controller Data 00:07:54.453 ============================ 00:07:54.454 Zone Append Size Limit: 0 00:07:54.454 00:07:54.454 00:07:54.454 Active Namespaces 00:07:54.454 ================= 00:07:54.454 Namespace ID:1 00:07:54.454 Error Recovery Timeout: Unlimited 00:07:54.454 Command Set Identifier: NVM (00h) 00:07:54.454 Deallocate: Supported 00:07:54.454 Deallocated/Unwritten Error: Supported 00:07:54.454 Deallocated Read Value: All 0x00 00:07:54.454 Deallocate in Write Zeroes: Not Supported 00:07:54.454 Deallocated Guard Field: 0xFFFF 00:07:54.454 Flush: Supported 00:07:54.454 Reservation: Not Supported 00:07:54.454 Namespace Sharing Capabilities: Private 00:07:54.454 Size (in LBAs): 1310720 (5GiB) 00:07:54.454 Capacity (in LBAs): 1310720 (5GiB) 00:07:54.454 Utilization (in LBAs): 1310720 (5GiB) 00:07:54.454 Thin Provisioning: Not Supported 00:07:54.454 Per-NS Atomic Units: No 00:07:54.454 Maximum Single Source Range Length: 128 00:07:54.454 Maximum Copy Length: 128 00:07:54.454 Maximum Source Range Count: 128 00:07:54.454 NGUID/EUI64 Never Reused: No 00:07:54.454 Namespace Write Protected: No 00:07:54.454 Number of LBA Formats: 8 00:07:54.454 Current LBA Format: LBA Format #04 00:07:54.454 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:07:54.454 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.454 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.454 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.454 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.454 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.454 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.454 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.454 00:07:54.454 NVM Specific Namespace Data 00:07:54.454 =========================== 00:07:54.454 Logical Block Storage Tag Mask: 0 00:07:54.454 Protection Information Capabilities: 00:07:54.454 16b Guard Protection Information Storage Tag Support: No 00:07:54.454 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.454 Storage Tag Check Read Support: No 00:07:54.454 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.454 ===================================================== 00:07:54.454 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.454 ===================================================== 00:07:54.454 Controller Capabilities/Features 00:07:54.454 ================================ 00:07:54.454 Vendor ID: 1b36 
00:07:54.454 Subsystem Vendor ID: 1af4 00:07:54.454 Serial Number: 12343 00:07:54.454 Model Number: QEMU NVMe Ctrl 00:07:54.454 Firmware Version: 8.0.0 00:07:54.454 Recommended Arb Burst: 6 00:07:54.454 IEEE OUI Identifier: 00 54 52 00:07:54.454 Multi-path I/O 00:07:54.454 May have multiple subsystem ports: No 00:07:54.454 May have multiple controllers: Yes 00:07:54.454 Associated with SR-IOV VF: No 00:07:54.454 Max Data Transfer Size: 524288 00:07:54.454 Max Number of Namespaces: 256 00:07:54.454 Max Number of I/O Queues: 64 00:07:54.454 NVMe Specification Version (VS): 1.4 00:07:54.454 NVMe Specification Version (Identify): 1.4 00:07:54.454 Maximum Queue Entries: 2048 00:07:54.454 Contiguous Queues Required: Yes 00:07:54.454 Arbitration Mechanisms Supported 00:07:54.454 Weighted Round Robin: Not Supported 00:07:54.454 Vendor Specific: Not Supported 00:07:54.454 Reset Timeout: 7500 ms 00:07:54.454 Doorbell Stride: 4 bytes 00:07:54.454 NVM Subsystem Reset: Not Supported 00:07:54.454 Command Sets Supported 00:07:54.454 NVM Command Set: Supported 00:07:54.454 Boot Partition: Not Supported 00:07:54.454 Memory Page Size Minimum: 4096 bytes 00:07:54.454 Memory Page Size Maximum: 65536 bytes 00:07:54.454 Persistent Memory Region: Not Supported 00:07:54.454 Optional Asynchronous Events Supported 00:07:54.454 Namespace Attribute Notices: Supported 00:07:54.454 Firmware Activation Notices: Not Supported 00:07:54.454 ANA Change Notices: Not Supported 00:07:54.454 PLE Aggregate Log Change Notices: Not Supported 00:07:54.454 LBA Status Info Alert Notices: Not Supported 00:07:54.454 EGE Aggregate Log Change Notices: Not Supported 00:07:54.454 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.454 Zone Descriptor Change Notices: Not Supported 00:07:54.454 Discovery Log Change Notices: Not Supported 00:07:54.454 Controller Attributes 00:07:54.454 128-bit Host Identifier: Not Supported 00:07:54.454 Non-Operational Permissive Mode: Not Supported 00:07:54.454 NVM Sets: Not 
Supported 00:07:54.454 Read Recovery Levels: Not Supported 00:07:54.454 Endurance Groups: Supported 00:07:54.454 Predictable Latency Mode: Not Supported 00:07:54.454 Traffic Based Keep ALive: Not Supported 00:07:54.454 Namespace Granularity: Not Supported 00:07:54.454 SQ Associations: Not Supported 00:07:54.454 UUID List: Not Supported 00:07:54.454 Multi-Domain Subsystem: Not Supported 00:07:54.454 Fixed Capacity Management: Not Supported 00:07:54.454 Variable Capacity Management: Not Supported 00:07:54.454 Delete Endurance Group: Not Supported 00:07:54.454 Delete NVM Set: Not Supported 00:07:54.454 Extended LBA Formats Supported: Supported 00:07:54.454 Flexible Data Placement Supported: Supported 00:07:54.454 00:07:54.454 Controller Memory Buffer Support 00:07:54.454 ================================ 00:07:54.454 Supported: No 00:07:54.454 00:07:54.454 Persistent Memory Region Support 00:07:54.454 ================================ 00:07:54.454 Supported: No 00:07:54.454 00:07:54.454 Admin Command Set Attributes 00:07:54.454 ============================ 00:07:54.454 Security Send/Receive: Not Supported 00:07:54.454 Format NVM: Supported 00:07:54.454 Firmware Activate/Download: Not Supported 00:07:54.454 Namespace Management: Supported 00:07:54.454 Device Self-Test: Not Supported 00:07:54.454 Directives: Supported 00:07:54.454 NVMe-MI: Not Supported 00:07:54.454 Virtualization Management: Not Supported 00:07:54.454 Doorbell Buffer Config: Supported 00:07:54.454 Get LBA Status Capability: Not Supported 00:07:54.454 Command & Feature Lockdown Capability: Not Supported 00:07:54.454 Abort Command Limit: 4 00:07:54.454 Async Event Request Limit: 4 00:07:54.454 Number of Firmware Slots: N/A 00:07:54.454 Firmware Slot 1 Read-Only: N/A 00:07:54.454 Firmware Activation Without Reset: N/A 00:07:54.454 Multiple Update Detection Support: N/A 00:07:54.454 Firmware Update Granularity: No Information Provided 00:07:54.454 Per-Namespace SMART Log: Yes 00:07:54.454 Asymmetric 
Namespace Access Log Page: Not Supported 00:07:54.454 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:54.454 Command Effects Log Page: Supported 00:07:54.454 Get Log Page Extended Data: Supported 00:07:54.454 Telemetry Log Pages: Not Supported 00:07:54.454 Persistent Event Log Pages: Not Supported 00:07:54.454 Supported Log Pages Log Page: May Support 00:07:54.454 Commands Supported & Effects Log Page: Not Supported 00:07:54.454 Feature Identifiers & Effects Log Page: May Support 00:07:54.454 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.454 Data Area 4 for Telemetry Log: Not Supported 00:07:54.454 Error Log Page Entries Supported: 1 00:07:54.454 Keep Alive: Not Supported 00:07:54.454 00:07:54.454 NVM Command Set Attributes 00:07:54.455 ========================== 00:07:54.455 Submission Queue Entry Size 00:07:54.455 Max: 64 00:07:54.455 Min: 64 00:07:54.455 Completion Queue Entry Size 00:07:54.455 Max: 16 00:07:54.455 Min: 16 00:07:54.455 Number of Namespaces: 256 00:07:54.455 Compare Command: Supported 00:07:54.455 Write Uncorrectable Command: Not Supported 00:07:54.455 Dataset Management Command: Supported 00:07:54.455 Write Zeroes Command: Supported 00:07:54.455 Set Features Save Field: Supported 00:07:54.455 Reservations: Not Supported 00:07:54.455 Timestamp: Supported 00:07:54.455 Copy: Supported 00:07:54.455 Volatile Write Cache: Present 00:07:54.455 Atomic Write Unit (Normal): 1 00:07:54.455 Atomic Write Unit (PFail): 1 00:07:54.455 Atomic Compare & Write Unit: 1 00:07:54.455 Fused Compare & Write: Not Supported 00:07:54.455 Scatter-Gather List 00:07:54.455 SGL Command Set: Supported 00:07:54.455 SGL Keyed: Not Supported 00:07:54.455 SGL Bit Bucket Descriptor: Not Supported 00:07:54.455 SGL Metadata Pointer: Not Supported 00:07:54.455 Oversized SGL: Not Supported 00:07:54.455 SGL Metadata Address: Not Supported 00:07:54.455 SGL Offset: Not Supported 00:07:54.455 Transport SGL Data Block: Not Supported 00:07:54.455 Replay Protected Memory 
Block: Not Supported 00:07:54.455 00:07:54.455 Firmware Slot Information 00:07:54.455 ========================= 00:07:54.455 Active slot: 1 00:07:54.455 Slot 1 Firmware Revision: 1.0 00:07:54.455 00:07:54.455 00:07:54.455 Commands Supported and Effects 00:07:54.455 ============================== 00:07:54.455 Admin Commands 00:07:54.455 -------------- 00:07:54.455 Delete I/O Submission Queue (00h): Supported 00:07:54.455 Create I/O Submission Queue (01h): Supported 00:07:54.455 Get Log Page (02h): Supported 00:07:54.455 Delete I/O Completion Queue (04h): Supported 00:07:54.455 Create I/O Completion Queue (05h): Supported 00:07:54.455 Identify (06h): Supported 00:07:54.455 Abort (08h): Supported 00:07:54.455 Set Features (09h): Supported 00:07:54.455 Get Features (0Ah): Supported 00:07:54.455 Asynchronous Event Request (0Ch): Supported 00:07:54.455 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.455 Directive Send (19h): Supported 00:07:54.455 Directive Receive (1Ah): Supported 00:07:54.455 Virtualization Management (1Ch): Supported 00:07:54.455 Doorbell Buffer Config (7Ch): Supported 00:07:54.455 Format NVM (80h): Supported LBA-Change 00:07:54.455 I/O Commands 00:07:54.455 ------------ 00:07:54.455 Flush (00h): Supported LBA-Change 00:07:54.455 Write (01h): Supported LBA-Change 00:07:54.455 Read (02h): Supported 00:07:54.455 Compare (05h): Supported 00:07:54.455 Write Zeroes (08h): Supported LBA-Change 00:07:54.455 Dataset Management (09h): Supported LBA-Change 00:07:54.455 Unknown (0Ch): Supported 00:07:54.455 Unknown (12h): Supported 00:07:54.455 Copy (19h): Supported LBA-Change 00:07:54.455 Unknown (1Dh): Supported LBA-Change 00:07:54.455 00:07:54.455 Error Log 00:07:54.455 ========= 00:07:54.455 00:07:54.455 Arbitration 00:07:54.455 =========== 00:07:54.455 Arbitration Burst: no limit 00:07:54.455 00:07:54.455 Power Management 00:07:54.455 ================ 00:07:54.455 Number of Power States: 1 00:07:54.455 Current Power State: Power State #0 
00:07:54.455 Power State #0: 00:07:54.455 Max Power: 25.00 W 00:07:54.455 Non-Operational State: Operational 00:07:54.455 Entry Latency: 16 microseconds 00:07:54.455 Exit Latency: 4 microseconds 00:07:54.455 Relative Read Throughput: 0 00:07:54.455 Relative Read Latency: 0 00:07:54.455 Relative Write Throughput: 0 00:07:54.455 Relative Write Latency: 0 00:07:54.455 Idle Power: Not Reported 00:07:54.455 Active Power: Not Reported 00:07:54.455 Non-Operational Permissive Mode: Not Supported 00:07:54.455 00:07:54.455 Health Information 00:07:54.455 ================== 00:07:54.455 Critical Warnings: 00:07:54.455 Available Spare Space: OK 00:07:54.455 Temperature: OK 00:07:54.455 Device Reliability: OK 00:07:54.455 Read Only: No 00:07:54.455 Volatile Memory Backup: OK 00:07:54.455 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.455 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.455 Available Spare: 0% 00:07:54.455 Available Spare Threshold: 0% 00:07:54.455 Life Percentage Used: 0% 00:07:54.455 Data Units Read: 940 00:07:54.455 Data Units Written: 869 00:07:54.455 Host Read Commands: 40724 00:07:54.455 Host Write Commands: 40147 00:07:54.455 Controller Busy Time: 0 minutes 00:07:54.455 Power Cycles: 0 00:07:54.455 Power On Hours: 0 hours 00:07:54.455 Unsafe Shutdowns: 0 00:07:54.455 Unrecoverable Media Errors: 0 00:07:54.455 Lifetime Error Log Entries: 0 00:07:54.455 Warning Temperature Time: 0 minutes 00:07:54.455 Critical Temperature Time: 0 minutes 00:07:54.455 00:07:54.455 Number of Queues 00:07:54.455 ================ 00:07:54.455 Number of I/O Submission Queues: 64 00:07:54.455 Number of I/O Completion Queues: 64 00:07:54.455 00:07:54.455 ZNS Specific Controller Data 00:07:54.455 ============================ 00:07:54.455 Zone Append Size Limit: 0 00:07:54.455 00:07:54.455 00:07:54.455 Active Namespaces 00:07:54.455 ================= 00:07:54.455 Namespace ID:1 00:07:54.455 Error Recovery Timeout: Unlimited 00:07:54.455 Command Set Identifier: NVM 
(00h) 00:07:54.455 Deallocate: Supported 00:07:54.455 Deallocated/Unwritten Error: Supported 00:07:54.455 Deallocated Read Value: All 0x00 00:07:54.455 Deallocate in Write Zeroes: Not Supported 00:07:54.455 Deallocated Guard Field: 0xFFFF 00:07:54.455 Flush: Supported 00:07:54.455 Reservation: Not Supported 00:07:54.455 Namespace Sharing Capabilities: Multiple Controllers 00:07:54.455 Size (in LBAs): 262144 (1GiB) 00:07:54.455 Capacity (in LBAs): 262144 (1GiB) 00:07:54.455 Utilization (in LBAs): 262144 (1GiB) 00:07:54.455 Thin Provisioning: Not Supported 00:07:54.455 Per-NS Atomic Units: No 00:07:54.455 Maximum Single Source Range Length: 128 00:07:54.455 Maximum Copy Length: 128 00:07:54.455 Maximum Source Range Count: 128 00:07:54.455 NGUID/EUI64 Never Reused: No 00:07:54.455 Namespace Write Protected: No 00:07:54.455 Endurance group ID: 1 00:07:54.455 Number of LBA Formats: 8 00:07:54.455 Current LBA Format: LBA Format #04 00:07:54.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.455 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.455 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.455 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.455 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.455 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.455 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.455 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.455 00:07:54.455 Get Feature FDP: 00:07:54.456 ================ 00:07:54.456 Enabled: Yes 00:07:54.456 FDP configuration index: 0 00:07:54.456 00:07:54.456 FDP configurations log page 00:07:54.456 =========================== 00:07:54.456 Number of FDP configurations: 1 00:07:54.456 Version: 0 00:07:54.456 Size: 112 00:07:54.456 FDP Configuration Descriptor: 0 00:07:54.456 Descriptor Size: 96 00:07:54.456 Reclaim Group Identifier format: 2 00:07:54.456 FDP Volatile Write Cache: Not Present 00:07:54.456 FDP Configuration: Valid 00:07:54.456 Vendor 
Specific Size: 0 00:07:54.456 Number of Reclaim Groups: 2 00:07:54.456 Number of Reclaim Unit Handles: 8 00:07:54.456 Max Placement Identifiers: 128 00:07:54.456 Number of Namespaces Supported: 256 00:07:54.456 Reclaim Unit Nominal Size: 6000000 bytes 00:07:54.456 Estimated Reclaim Unit Time Limit: Not Reported 00:07:54.456 RUH Desc #000: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #001: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #002: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #003: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #004: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #005: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #006: RUH Type: Initially Isolated 00:07:54.456 RUH Desc #007: RUH Type: Initially Isolated 00:07:54.456 00:07:54.456 FDP reclaim unit handle usage log page 00:07:54.456 ====================================== 00:07:54.456 Number of Reclaim Unit Handles: 8 00:07:54.456 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:54.456 RUH Usage Desc #001: RUH Attributes: Unused 00:07:54.456 RUH Usage Desc #002: RUH Attributes: Unused 00:07:54.456 RUH Usage Desc #003: RUH Attributes: Unused 00:07:54.456 RUH Usage Desc #004: RUH Attributes: Unused 00:07:54.456 RUH Usage Desc #005: RUH Attributes: Unused 00:07:54.456 RUH Usage Desc #006: RUH Attributes: Unused 00:07:54.456 RUH Usage Desc #007: RUH Attributes: Unused 00:07:54.456 00:07:54.456 FDP statistics log page 00:07:54.456 ======================= 00:07:54.456 Host bytes with metadata written: 551854080 00:07:54.456 [2024-09-30 19:51:38.720156] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63190 terminated unexpected 00:07:54.456 Media bytes with metadata written: 551931904 00:07:54.456 Media bytes erased: 0 00:07:54.456 00:07:54.456 FDP events log page 00:07:54.456 =================== 00:07:54.456 Number of FDP events: 0 00:07:54.456 00:07:54.456 NVM Specific Namespace Data 00:07:54.456 
=========================== 00:07:54.456 Logical Block Storage Tag Mask: 0 00:07:54.456 Protection Information Capabilities: 00:07:54.456 16b Guard Protection Information Storage Tag Support: No 00:07:54.456 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.456 Storage Tag Check Read Support: No 00:07:54.456 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.456 ===================================================== 00:07:54.456 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.456 ===================================================== 00:07:54.456 Controller Capabilities/Features 00:07:54.456 ================================ 00:07:54.456 Vendor ID: 1b36 00:07:54.456 Subsystem Vendor ID: 1af4 00:07:54.456 Serial Number: 12342 00:07:54.456 Model Number: QEMU NVMe Ctrl 00:07:54.456 Firmware Version: 8.0.0 00:07:54.456 Recommended Arb Burst: 6 00:07:54.456 IEEE OUI Identifier: 00 54 52 00:07:54.456 Multi-path I/O 00:07:54.456 May have multiple subsystem ports: No 00:07:54.456 May have multiple controllers: No 00:07:54.456 Associated with SR-IOV VF: No 00:07:54.456 Max Data Transfer Size: 524288 00:07:54.456 Max Number of Namespaces: 256 00:07:54.456 Max Number of I/O 
Queues: 64 00:07:54.456 NVMe Specification Version (VS): 1.4 00:07:54.456 NVMe Specification Version (Identify): 1.4 00:07:54.456 Maximum Queue Entries: 2048 00:07:54.456 Contiguous Queues Required: Yes 00:07:54.456 Arbitration Mechanisms Supported 00:07:54.456 Weighted Round Robin: Not Supported 00:07:54.456 Vendor Specific: Not Supported 00:07:54.456 Reset Timeout: 7500 ms 00:07:54.456 Doorbell Stride: 4 bytes 00:07:54.456 NVM Subsystem Reset: Not Supported 00:07:54.456 Command Sets Supported 00:07:54.456 NVM Command Set: Supported 00:07:54.456 Boot Partition: Not Supported 00:07:54.456 Memory Page Size Minimum: 4096 bytes 00:07:54.456 Memory Page Size Maximum: 65536 bytes 00:07:54.456 Persistent Memory Region: Not Supported 00:07:54.456 Optional Asynchronous Events Supported 00:07:54.456 Namespace Attribute Notices: Supported 00:07:54.456 Firmware Activation Notices: Not Supported 00:07:54.456 ANA Change Notices: Not Supported 00:07:54.456 PLE Aggregate Log Change Notices: Not Supported 00:07:54.456 LBA Status Info Alert Notices: Not Supported 00:07:54.456 EGE Aggregate Log Change Notices: Not Supported 00:07:54.456 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.456 Zone Descriptor Change Notices: Not Supported 00:07:54.456 Discovery Log Change Notices: Not Supported 00:07:54.456 Controller Attributes 00:07:54.456 128-bit Host Identifier: Not Supported 00:07:54.456 Non-Operational Permissive Mode: Not Supported 00:07:54.456 NVM Sets: Not Supported 00:07:54.456 Read Recovery Levels: Not Supported 00:07:54.456 Endurance Groups: Not Supported 00:07:54.456 Predictable Latency Mode: Not Supported 00:07:54.456 Traffic Based Keep Alive: Not Supported 00:07:54.456 Namespace Granularity: Not Supported 00:07:54.457 SQ Associations: Not Supported 00:07:54.457 UUID List: Not Supported 00:07:54.457 Multi-Domain Subsystem: Not Supported 00:07:54.457 Fixed Capacity Management: Not Supported 00:07:54.457 Variable Capacity Management: Not Supported 00:07:54.457 
Delete Endurance Group: Not Supported 00:07:54.457 Delete NVM Set: Not Supported 00:07:54.457 Extended LBA Formats Supported: Supported 00:07:54.457 Flexible Data Placement Supported: Not Supported 00:07:54.457 00:07:54.457 Controller Memory Buffer Support 00:07:54.457 ================================ 00:07:54.457 Supported: No 00:07:54.457 00:07:54.457 Persistent Memory Region Support 00:07:54.457 ================================ 00:07:54.457 Supported: No 00:07:54.457 00:07:54.457 Admin Command Set Attributes 00:07:54.457 ============================ 00:07:54.457 Security Send/Receive: Not Supported 00:07:54.457 Format NVM: Supported 00:07:54.457 Firmware Activate/Download: Not Supported 00:07:54.457 Namespace Management: Supported 00:07:54.457 Device Self-Test: Not Supported 00:07:54.457 Directives: Supported 00:07:54.457 NVMe-MI: Not Supported 00:07:54.457 Virtualization Management: Not Supported 00:07:54.457 Doorbell Buffer Config: Supported 00:07:54.457 Get LBA Status Capability: Not Supported 00:07:54.457 Command & Feature Lockdown Capability: Not Supported 00:07:54.457 Abort Command Limit: 4 00:07:54.457 Async Event Request Limit: 4 00:07:54.457 Number of Firmware Slots: N/A 00:07:54.457 Firmware Slot 1 Read-Only: N/A 00:07:54.457 Firmware Activation Without Reset: N/A 00:07:54.457 Multiple Update Detection Support: N/A 00:07:54.457 Firmware Update Granularity: No Information Provided 00:07:54.457 Per-Namespace SMART Log: Yes 00:07:54.457 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.457 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:54.457 Command Effects Log Page: Supported 00:07:54.457 Get Log Page Extended Data: Supported 00:07:54.457 Telemetry Log Pages: Not Supported 00:07:54.457 Persistent Event Log Pages: Not Supported 00:07:54.457 Supported Log Pages Log Page: May Support 00:07:54.457 Commands Supported & Effects Log Page: Not Supported 00:07:54.457 Feature Identifiers & Effects Log Page: May Support 00:07:54.457 NVMe-MI Commands & 
Effects Log Page: May Support 00:07:54.457 Data Area 4 for Telemetry Log: Not Supported 00:07:54.457 Error Log Page Entries Supported: 1 00:07:54.457 Keep Alive: Not Supported 00:07:54.457 00:07:54.457 NVM Command Set Attributes 00:07:54.457 ========================== 00:07:54.457 Submission Queue Entry Size 00:07:54.457 Max: 64 00:07:54.457 Min: 64 00:07:54.457 Completion Queue Entry Size 00:07:54.457 Max: 16 00:07:54.457 Min: 16 00:07:54.457 Number of Namespaces: 256 00:07:54.457 Compare Command: Supported 00:07:54.457 Write Uncorrectable Command: Not Supported 00:07:54.457 Dataset Management Command: Supported 00:07:54.457 Write Zeroes Command: Supported 00:07:54.457 Set Features Save Field: Supported 00:07:54.457 Reservations: Not Supported 00:07:54.457 Timestamp: Supported 00:07:54.457 Copy: Supported 00:07:54.457 Volatile Write Cache: Present 00:07:54.457 Atomic Write Unit (Normal): 1 00:07:54.457 Atomic Write Unit (PFail): 1 00:07:54.457 Atomic Compare & Write Unit: 1 00:07:54.457 Fused Compare & Write: Not Supported 00:07:54.457 Scatter-Gather List 00:07:54.457 SGL Command Set: Supported 00:07:54.457 SGL Keyed: Not Supported 00:07:54.457 SGL Bit Bucket Descriptor: Not Supported 00:07:54.457 SGL Metadata Pointer: Not Supported 00:07:54.457 Oversized SGL: Not Supported 00:07:54.457 SGL Metadata Address: Not Supported 00:07:54.457 SGL Offset: Not Supported 00:07:54.457 Transport SGL Data Block: Not Supported 00:07:54.457 Replay Protected Memory Block: Not Supported 00:07:54.457 00:07:54.457 Firmware Slot Information 00:07:54.457 ========================= 00:07:54.457 Active slot: 1 00:07:54.457 Slot 1 Firmware Revision: 1.0 00:07:54.457 00:07:54.457 00:07:54.457 Commands Supported and Effects 00:07:54.457 ============================== 00:07:54.457 Admin Commands 00:07:54.457 -------------- 00:07:54.457 Delete I/O Submission Queue (00h): Supported 00:07:54.457 Create I/O Submission Queue (01h): Supported 00:07:54.457 Get Log Page (02h): Supported 00:07:54.457 
Delete I/O Completion Queue (04h): Supported 00:07:54.457 Create I/O Completion Queue (05h): Supported 00:07:54.457 Identify (06h): Supported 00:07:54.457 Abort (08h): Supported 00:07:54.457 Set Features (09h): Supported 00:07:54.457 Get Features (0Ah): Supported 00:07:54.457 Asynchronous Event Request (0Ch): Supported 00:07:54.457 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.457 Directive Send (19h): Supported 00:07:54.457 Directive Receive (1Ah): Supported 00:07:54.457 Virtualization Management (1Ch): Supported 00:07:54.457 Doorbell Buffer Config (7Ch): Supported 00:07:54.457 Format NVM (80h): Supported LBA-Change 00:07:54.457 I/O Commands 00:07:54.457 ------------ 00:07:54.457 Flush (00h): Supported LBA-Change 00:07:54.457 Write (01h): Supported LBA-Change 00:07:54.457 Read (02h): Supported 00:07:54.457 Compare (05h): Supported 00:07:54.457 Write Zeroes (08h): Supported LBA-Change 00:07:54.457 Dataset Management (09h): Supported LBA-Change 00:07:54.457 Unknown (0Ch): Supported 00:07:54.457 Unknown (12h): Supported 00:07:54.457 Copy (19h): Supported LBA-Change 00:07:54.457 Unknown (1Dh): Supported LBA-Change 00:07:54.457 00:07:54.457 Error Log 00:07:54.457 ========= 00:07:54.457 00:07:54.457 Arbitration 00:07:54.457 =========== 00:07:54.457 Arbitration Burst: no limit 00:07:54.457 00:07:54.457 Power Management 00:07:54.457 ================ 00:07:54.457 Number of Power States: 1 00:07:54.457 Current Power State: Power State #0 00:07:54.457 Power State #0: 00:07:54.457 Max Power: 25.00 W 00:07:54.457 Non-Operational State: Operational 00:07:54.457 Entry Latency: 16 microseconds 00:07:54.457 Exit Latency: 4 microseconds 00:07:54.457 Relative Read Throughput: 0 00:07:54.457 Relative Read Latency: 0 00:07:54.457 Relative Write Throughput: 0 00:07:54.457 Relative Write Latency: 0 00:07:54.457 Idle Power: Not Reported 00:07:54.457 Active Power: Not Reported 00:07:54.457 Non-Operational Permissive Mode: Not Supported 00:07:54.457 00:07:54.457 Health 
Information 00:07:54.457 ================== 00:07:54.457 Critical Warnings: 00:07:54.457 Available Spare Space: OK 00:07:54.457 Temperature: OK 00:07:54.457 Device Reliability: OK 00:07:54.457 Read Only: No 00:07:54.457 Volatile Memory Backup: OK 00:07:54.457 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.457 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.457 Available Spare: 0% 00:07:54.457 Available Spare Threshold: 0% 00:07:54.457 Life Percentage Used: 0% 00:07:54.457 Data Units Read: 2257 00:07:54.457 Data Units Written: 2045 00:07:54.457 Host Read Commands: 117296 00:07:54.457 Host Write Commands: 115565 00:07:54.457 Controller Busy Time: 0 minutes 00:07:54.458 Power Cycles: 0 00:07:54.458 Power On Hours: 0 hours 00:07:54.458 Unsafe Shutdowns: 0 00:07:54.458 Unrecoverable Media Errors: 0 00:07:54.458 Lifetime Error Log Entries: 0 00:07:54.458 Warning Temperature Time: 0 minutes 00:07:54.458 Critical Temperature Time: 0 minutes 00:07:54.458 00:07:54.458 Number of Queues 00:07:54.458 ================ 00:07:54.458 Number of I/O Submission Queues: 64 00:07:54.458 Number of I/O Completion Queues: 64 00:07:54.458 00:07:54.458 ZNS Specific Controller Data 00:07:54.458 ============================ 00:07:54.458 Zone Append Size Limit: 0 00:07:54.458 00:07:54.458 00:07:54.458 Active Namespaces 00:07:54.458 ================= 00:07:54.458 Namespace ID:1 00:07:54.458 Error Recovery Timeout: Unlimited 00:07:54.458 Command Set Identifier: NVM (00h) 00:07:54.458 Deallocate: Supported 00:07:54.458 Deallocated/Unwritten Error: Supported 00:07:54.458 Deallocated Read Value: All 0x00 00:07:54.458 Deallocate in Write Zeroes: Not Supported 00:07:54.458 Deallocated Guard Field: 0xFFFF 00:07:54.458 Flush: Supported 00:07:54.458 Reservation: Not Supported 00:07:54.458 Namespace Sharing Capabilities: Private 00:07:54.458 Size (in LBAs): 1048576 (4GiB) 00:07:54.458 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.458 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.458 Thin 
Provisioning: Not Supported 00:07:54.458 Per-NS Atomic Units: No 00:07:54.458 Maximum Single Source Range Length: 128 00:07:54.458 Maximum Copy Length: 128 00:07:54.458 Maximum Source Range Count: 128 00:07:54.458 NGUID/EUI64 Never Reused: No 00:07:54.458 Namespace Write Protected: No 00:07:54.458 Number of LBA Formats: 8 00:07:54.458 Current LBA Format: LBA Format #04 00:07:54.458 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.458 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.458 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.458 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.458 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.458 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.458 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.458 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.458 00:07:54.458 NVM Specific Namespace Data 00:07:54.458 =========================== 00:07:54.458 Logical Block Storage Tag Mask: 0 00:07:54.458 Protection Information Capabilities: 00:07:54.458 16b Guard Protection Information Storage Tag Support: No 00:07:54.458 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.458 Storage Tag Check Read Support: No 00:07:54.458 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:07:54.458 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Namespace ID:2 00:07:54.458 Error Recovery Timeout: Unlimited 00:07:54.458 Command Set Identifier: NVM (00h) 00:07:54.458 Deallocate: Supported 00:07:54.458 Deallocated/Unwritten Error: Supported 00:07:54.458 Deallocated Read Value: All 0x00 00:07:54.458 Deallocate in Write Zeroes: Not Supported 00:07:54.458 Deallocated Guard Field: 0xFFFF 00:07:54.458 Flush: Supported 00:07:54.458 Reservation: Not Supported 00:07:54.458 Namespace Sharing Capabilities: Private 00:07:54.458 Size (in LBAs): 1048576 (4GiB) 00:07:54.458 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.458 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.458 Thin Provisioning: Not Supported 00:07:54.458 Per-NS Atomic Units: No 00:07:54.458 Maximum Single Source Range Length: 128 00:07:54.458 Maximum Copy Length: 128 00:07:54.458 Maximum Source Range Count: 128 00:07:54.458 NGUID/EUI64 Never Reused: No 00:07:54.458 Namespace Write Protected: No 00:07:54.458 Number of LBA Formats: 8 00:07:54.458 Current LBA Format: LBA Format #04 00:07:54.458 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.458 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.458 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.458 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.458 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.458 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.458 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.458 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.458 00:07:54.458 NVM Specific Namespace Data 00:07:54.458 =========================== 00:07:54.458 Logical Block Storage Tag Mask: 0 00:07:54.458 Protection Information Capabilities: 00:07:54.458 16b Guard Protection Information Storage Tag Support: No 00:07:54.458 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.458 Storage Tag Check Read 
Support: No 00:07:54.458 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Namespace ID:3 00:07:54.458 Error Recovery Timeout: Unlimited 00:07:54.458 Command Set Identifier: NVM (00h) 00:07:54.458 Deallocate: Supported 00:07:54.458 Deallocated/Unwritten Error: Supported 00:07:54.458 Deallocated Read Value: All 0x00 00:07:54.458 Deallocate in Write Zeroes: Not Supported 00:07:54.458 Deallocated Guard Field: 0xFFFF 00:07:54.458 Flush: Supported 00:07:54.458 Reservation: Not Supported 00:07:54.458 Namespace Sharing Capabilities: Private 00:07:54.458 Size (in LBAs): 1048576 (4GiB) 00:07:54.458 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.458 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.458 Thin Provisioning: Not Supported 00:07:54.458 Per-NS Atomic Units: No 00:07:54.458 Maximum Single Source Range Length: 128 00:07:54.458 Maximum Copy Length: 128 00:07:54.458 Maximum Source Range Count: 128 00:07:54.458 NGUID/EUI64 Never Reused: No 00:07:54.458 Namespace Write Protected: No 00:07:54.458 Number of LBA Formats: 8 00:07:54.458 Current LBA Format: LBA Format #04 00:07:54.458 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.458 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.458 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:07:54.458 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.458 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.458 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.458 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.458 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.458 00:07:54.458 NVM Specific Namespace Data 00:07:54.458 =========================== 00:07:54.458 Logical Block Storage Tag Mask: 0 00:07:54.458 Protection Information Capabilities: 00:07:54.458 16b Guard Protection Information Storage Tag Support: No 00:07:54.458 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.458 Storage Tag Check Read Support: No 00:07:54.458 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.458 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.459 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.459 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.459 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.459 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.459 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.459 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:54.459 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:54.717 ===================================================== 00:07:54.717 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.717 
===================================================== 00:07:54.717 Controller Capabilities/Features 00:07:54.717 ================================ 00:07:54.717 Vendor ID: 1b36 00:07:54.717 Subsystem Vendor ID: 1af4 00:07:54.717 Serial Number: 12340 00:07:54.717 Model Number: QEMU NVMe Ctrl 00:07:54.717 Firmware Version: 8.0.0 00:07:54.717 Recommended Arb Burst: 6 00:07:54.717 IEEE OUI Identifier: 00 54 52 00:07:54.717 Multi-path I/O 00:07:54.717 May have multiple subsystem ports: No 00:07:54.717 May have multiple controllers: No 00:07:54.717 Associated with SR-IOV VF: No 00:07:54.717 Max Data Transfer Size: 524288 00:07:54.717 Max Number of Namespaces: 256 00:07:54.717 Max Number of I/O Queues: 64 00:07:54.717 NVMe Specification Version (VS): 1.4 00:07:54.717 NVMe Specification Version (Identify): 1.4 00:07:54.717 Maximum Queue Entries: 2048 00:07:54.717 Contiguous Queues Required: Yes 00:07:54.717 Arbitration Mechanisms Supported 00:07:54.717 Weighted Round Robin: Not Supported 00:07:54.717 Vendor Specific: Not Supported 00:07:54.717 Reset Timeout: 7500 ms 00:07:54.717 Doorbell Stride: 4 bytes 00:07:54.717 NVM Subsystem Reset: Not Supported 00:07:54.717 Command Sets Supported 00:07:54.717 NVM Command Set: Supported 00:07:54.717 Boot Partition: Not Supported 00:07:54.717 Memory Page Size Minimum: 4096 bytes 00:07:54.717 Memory Page Size Maximum: 65536 bytes 00:07:54.717 Persistent Memory Region: Not Supported 00:07:54.717 Optional Asynchronous Events Supported 00:07:54.717 Namespace Attribute Notices: Supported 00:07:54.717 Firmware Activation Notices: Not Supported 00:07:54.718 ANA Change Notices: Not Supported 00:07:54.718 PLE Aggregate Log Change Notices: Not Supported 00:07:54.718 LBA Status Info Alert Notices: Not Supported 00:07:54.718 EGE Aggregate Log Change Notices: Not Supported 00:07:54.718 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.718 Zone Descriptor Change Notices: Not Supported 00:07:54.718 Discovery Log Change Notices: Not Supported 
00:07:54.718 Controller Attributes 00:07:54.718 128-bit Host Identifier: Not Supported 00:07:54.718 Non-Operational Permissive Mode: Not Supported 00:07:54.718 NVM Sets: Not Supported 00:07:54.718 Read Recovery Levels: Not Supported 00:07:54.718 Endurance Groups: Not Supported 00:07:54.718 Predictable Latency Mode: Not Supported 00:07:54.718 Traffic Based Keep Alive: Not Supported 00:07:54.718 Namespace Granularity: Not Supported 00:07:54.718 SQ Associations: Not Supported 00:07:54.718 UUID List: Not Supported 00:07:54.718 Multi-Domain Subsystem: Not Supported 00:07:54.718 Fixed Capacity Management: Not Supported 00:07:54.718 Variable Capacity Management: Not Supported 00:07:54.718 Delete Endurance Group: Not Supported 00:07:54.718 Delete NVM Set: Not Supported 00:07:54.718 Extended LBA Formats Supported: Supported 00:07:54.718 Flexible Data Placement Supported: Not Supported 00:07:54.718 00:07:54.718 Controller Memory Buffer Support 00:07:54.718 ================================ 00:07:54.718 Supported: No 00:07:54.718 00:07:54.718 Persistent Memory Region Support 00:07:54.718 ================================ 00:07:54.718 Supported: No 00:07:54.718 00:07:54.718 Admin Command Set Attributes 00:07:54.718 ============================ 00:07:54.718 Security Send/Receive: Not Supported 00:07:54.718 Format NVM: Supported 00:07:54.718 Firmware Activate/Download: Not Supported 00:07:54.718 Namespace Management: Supported 00:07:54.718 Device Self-Test: Not Supported 00:07:54.718 Directives: Supported 00:07:54.718 NVMe-MI: Not Supported 00:07:54.718 Virtualization Management: Not Supported 00:07:54.718 Doorbell Buffer Config: Supported 00:07:54.718 Get LBA Status Capability: Not Supported 00:07:54.718 Command & Feature Lockdown Capability: Not Supported 00:07:54.718 Abort Command Limit: 4 00:07:54.718 Async Event Request Limit: 4 00:07:54.718 Number of Firmware Slots: N/A 00:07:54.718 Firmware Slot 1 Read-Only: N/A 00:07:54.718 Firmware Activation Without Reset: N/A 
00:07:54.718 Multiple Update Detection Support: N/A 00:07:54.718 Firmware Update Granularity: No Information Provided 00:07:54.718 Per-Namespace SMART Log: Yes 00:07:54.718 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.718 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:54.718 Command Effects Log Page: Supported 00:07:54.718 Get Log Page Extended Data: Supported 00:07:54.718 Telemetry Log Pages: Not Supported 00:07:54.718 Persistent Event Log Pages: Not Supported 00:07:54.718 Supported Log Pages Log Page: May Support 00:07:54.718 Commands Supported & Effects Log Page: Not Supported 00:07:54.718 Feature Identifiers & Effects Log Page:May Support 00:07:54.718 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.718 Data Area 4 for Telemetry Log: Not Supported 00:07:54.718 Error Log Page Entries Supported: 1 00:07:54.718 Keep Alive: Not Supported 00:07:54.718 00:07:54.718 NVM Command Set Attributes 00:07:54.718 ========================== 00:07:54.718 Submission Queue Entry Size 00:07:54.718 Max: 64 00:07:54.718 Min: 64 00:07:54.718 Completion Queue Entry Size 00:07:54.718 Max: 16 00:07:54.718 Min: 16 00:07:54.718 Number of Namespaces: 256 00:07:54.718 Compare Command: Supported 00:07:54.718 Write Uncorrectable Command: Not Supported 00:07:54.718 Dataset Management Command: Supported 00:07:54.718 Write Zeroes Command: Supported 00:07:54.718 Set Features Save Field: Supported 00:07:54.718 Reservations: Not Supported 00:07:54.718 Timestamp: Supported 00:07:54.718 Copy: Supported 00:07:54.718 Volatile Write Cache: Present 00:07:54.718 Atomic Write Unit (Normal): 1 00:07:54.718 Atomic Write Unit (PFail): 1 00:07:54.718 Atomic Compare & Write Unit: 1 00:07:54.718 Fused Compare & Write: Not Supported 00:07:54.718 Scatter-Gather List 00:07:54.718 SGL Command Set: Supported 00:07:54.718 SGL Keyed: Not Supported 00:07:54.718 SGL Bit Bucket Descriptor: Not Supported 00:07:54.718 SGL Metadata Pointer: Not Supported 00:07:54.718 Oversized SGL: Not Supported 
00:07:54.718 SGL Metadata Address: Not Supported 00:07:54.718 SGL Offset: Not Supported 00:07:54.718 Transport SGL Data Block: Not Supported 00:07:54.718 Replay Protected Memory Block: Not Supported 00:07:54.718 00:07:54.718 Firmware Slot Information 00:07:54.718 ========================= 00:07:54.718 Active slot: 1 00:07:54.718 Slot 1 Firmware Revision: 1.0 00:07:54.718 00:07:54.718 00:07:54.718 Commands Supported and Effects 00:07:54.718 ============================== 00:07:54.718 Admin Commands 00:07:54.718 -------------- 00:07:54.718 Delete I/O Submission Queue (00h): Supported 00:07:54.718 Create I/O Submission Queue (01h): Supported 00:07:54.718 Get Log Page (02h): Supported 00:07:54.718 Delete I/O Completion Queue (04h): Supported 00:07:54.718 Create I/O Completion Queue (05h): Supported 00:07:54.718 Identify (06h): Supported 00:07:54.718 Abort (08h): Supported 00:07:54.718 Set Features (09h): Supported 00:07:54.718 Get Features (0Ah): Supported 00:07:54.718 Asynchronous Event Request (0Ch): Supported 00:07:54.718 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.718 Directive Send (19h): Supported 00:07:54.718 Directive Receive (1Ah): Supported 00:07:54.718 Virtualization Management (1Ch): Supported 00:07:54.718 Doorbell Buffer Config (7Ch): Supported 00:07:54.718 Format NVM (80h): Supported LBA-Change 00:07:54.718 I/O Commands 00:07:54.718 ------------ 00:07:54.718 Flush (00h): Supported LBA-Change 00:07:54.718 Write (01h): Supported LBA-Change 00:07:54.718 Read (02h): Supported 00:07:54.718 Compare (05h): Supported 00:07:54.718 Write Zeroes (08h): Supported LBA-Change 00:07:54.718 Dataset Management (09h): Supported LBA-Change 00:07:54.718 Unknown (0Ch): Supported 00:07:54.718 Unknown (12h): Supported 00:07:54.718 Copy (19h): Supported LBA-Change 00:07:54.718 Unknown (1Dh): Supported LBA-Change 00:07:54.718 00:07:54.718 Error Log 00:07:54.718 ========= 00:07:54.718 00:07:54.718 Arbitration 00:07:54.718 =========== 00:07:54.718 Arbitration 
Burst: no limit 00:07:54.718 00:07:54.718 Power Management 00:07:54.718 ================ 00:07:54.718 Number of Power States: 1 00:07:54.718 Current Power State: Power State #0 00:07:54.718 Power State #0: 00:07:54.718 Max Power: 25.00 W 00:07:54.718 Non-Operational State: Operational 00:07:54.718 Entry Latency: 16 microseconds 00:07:54.718 Exit Latency: 4 microseconds 00:07:54.718 Relative Read Throughput: 0 00:07:54.718 Relative Read Latency: 0 00:07:54.718 Relative Write Throughput: 0 00:07:54.718 Relative Write Latency: 0 00:07:54.718 Idle Power: Not Reported 00:07:54.718 Active Power: Not Reported 00:07:54.718 Non-Operational Permissive Mode: Not Supported 00:07:54.718 00:07:54.718 Health Information 00:07:54.718 ================== 00:07:54.718 Critical Warnings: 00:07:54.718 Available Spare Space: OK 00:07:54.718 Temperature: OK 00:07:54.718 Device Reliability: OK 00:07:54.718 Read Only: No 00:07:54.718 Volatile Memory Backup: OK 00:07:54.718 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.718 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.718 Available Spare: 0% 00:07:54.718 Available Spare Threshold: 0% 00:07:54.718 Life Percentage Used: 0% 00:07:54.718 Data Units Read: 679 00:07:54.718 Data Units Written: 607 00:07:54.718 Host Read Commands: 38082 00:07:54.718 Host Write Commands: 37868 00:07:54.718 Controller Busy Time: 0 minutes 00:07:54.718 Power Cycles: 0 00:07:54.718 Power On Hours: 0 hours 00:07:54.718 Unsafe Shutdowns: 0 00:07:54.718 Unrecoverable Media Errors: 0 00:07:54.718 Lifetime Error Log Entries: 0 00:07:54.718 Warning Temperature Time: 0 minutes 00:07:54.718 Critical Temperature Time: 0 minutes 00:07:54.718 00:07:54.718 Number of Queues 00:07:54.718 ================ 00:07:54.718 Number of I/O Submission Queues: 64 00:07:54.718 Number of I/O Completion Queues: 64 00:07:54.718 00:07:54.718 ZNS Specific Controller Data 00:07:54.718 ============================ 00:07:54.718 Zone Append Size Limit: 0 00:07:54.718 00:07:54.718 
00:07:54.718 Active Namespaces 00:07:54.718 ================= 00:07:54.718 Namespace ID:1 00:07:54.718 Error Recovery Timeout: Unlimited 00:07:54.718 Command Set Identifier: NVM (00h) 00:07:54.718 Deallocate: Supported 00:07:54.718 Deallocated/Unwritten Error: Supported 00:07:54.718 Deallocated Read Value: All 0x00 00:07:54.718 Deallocate in Write Zeroes: Not Supported 00:07:54.718 Deallocated Guard Field: 0xFFFF 00:07:54.718 Flush: Supported 00:07:54.719 Reservation: Not Supported 00:07:54.719 Metadata Transferred as: Separate Metadata Buffer 00:07:54.719 Namespace Sharing Capabilities: Private 00:07:54.719 Size (in LBAs): 1548666 (5GiB) 00:07:54.719 Capacity (in LBAs): 1548666 (5GiB) 00:07:54.719 Utilization (in LBAs): 1548666 (5GiB) 00:07:54.719 Thin Provisioning: Not Supported 00:07:54.719 Per-NS Atomic Units: No 00:07:54.719 Maximum Single Source Range Length: 128 00:07:54.719 Maximum Copy Length: 128 00:07:54.719 Maximum Source Range Count: 128 00:07:54.719 NGUID/EUI64 Never Reused: No 00:07:54.719 Namespace Write Protected: No 00:07:54.719 Number of LBA Formats: 8 00:07:54.719 Current LBA Format: LBA Format #07 00:07:54.719 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.719 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.719 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.719 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.719 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.719 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.719 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.719 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.719 00:07:54.719 NVM Specific Namespace Data 00:07:54.719 =========================== 00:07:54.719 Logical Block Storage Tag Mask: 0 00:07:54.719 Protection Information Capabilities: 00:07:54.719 16b Guard Protection Information Storage Tag Support: No 00:07:54.719 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.719 
Storage Tag Check Read Support: No 00:07:54.719 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.719 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:54.719 19:51:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:54.977 ===================================================== 00:07:54.977 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.977 ===================================================== 00:07:54.977 Controller Capabilities/Features 00:07:54.977 ================================ 00:07:54.977 Vendor ID: 1b36 00:07:54.977 Subsystem Vendor ID: 1af4 00:07:54.977 Serial Number: 12341 00:07:54.977 Model Number: QEMU NVMe Ctrl 00:07:54.977 Firmware Version: 8.0.0 00:07:54.977 Recommended Arb Burst: 6 00:07:54.977 IEEE OUI Identifier: 00 54 52 00:07:54.977 Multi-path I/O 00:07:54.977 May have multiple subsystem ports: No 00:07:54.977 May have multiple controllers: No 00:07:54.977 Associated with SR-IOV VF: No 00:07:54.977 Max Data Transfer Size: 524288 00:07:54.977 Max Number of Namespaces: 256 00:07:54.977 Max Number of I/O Queues: 64 00:07:54.977 NVMe Specification 
Version (VS): 1.4 00:07:54.977 NVMe Specification Version (Identify): 1.4 00:07:54.977 Maximum Queue Entries: 2048 00:07:54.977 Contiguous Queues Required: Yes 00:07:54.977 Arbitration Mechanisms Supported 00:07:54.977 Weighted Round Robin: Not Supported 00:07:54.977 Vendor Specific: Not Supported 00:07:54.977 Reset Timeout: 7500 ms 00:07:54.977 Doorbell Stride: 4 bytes 00:07:54.977 NVM Subsystem Reset: Not Supported 00:07:54.977 Command Sets Supported 00:07:54.977 NVM Command Set: Supported 00:07:54.977 Boot Partition: Not Supported 00:07:54.977 Memory Page Size Minimum: 4096 bytes 00:07:54.977 Memory Page Size Maximum: 65536 bytes 00:07:54.977 Persistent Memory Region: Not Supported 00:07:54.977 Optional Asynchronous Events Supported 00:07:54.977 Namespace Attribute Notices: Supported 00:07:54.977 Firmware Activation Notices: Not Supported 00:07:54.977 ANA Change Notices: Not Supported 00:07:54.977 PLE Aggregate Log Change Notices: Not Supported 00:07:54.977 LBA Status Info Alert Notices: Not Supported 00:07:54.977 EGE Aggregate Log Change Notices: Not Supported 00:07:54.977 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.977 Zone Descriptor Change Notices: Not Supported 00:07:54.977 Discovery Log Change Notices: Not Supported 00:07:54.977 Controller Attributes 00:07:54.977 128-bit Host Identifier: Not Supported 00:07:54.977 Non-Operational Permissive Mode: Not Supported 00:07:54.977 NVM Sets: Not Supported 00:07:54.977 Read Recovery Levels: Not Supported 00:07:54.977 Endurance Groups: Not Supported 00:07:54.977 Predictable Latency Mode: Not Supported 00:07:54.977 Traffic Based Keep ALive: Not Supported 00:07:54.977 Namespace Granularity: Not Supported 00:07:54.977 SQ Associations: Not Supported 00:07:54.977 UUID List: Not Supported 00:07:54.977 Multi-Domain Subsystem: Not Supported 00:07:54.977 Fixed Capacity Management: Not Supported 00:07:54.977 Variable Capacity Management: Not Supported 00:07:54.977 Delete Endurance Group: Not Supported 
00:07:54.977 Delete NVM Set: Not Supported 00:07:54.977 Extended LBA Formats Supported: Supported 00:07:54.977 Flexible Data Placement Supported: Not Supported 00:07:54.977 00:07:54.977 Controller Memory Buffer Support 00:07:54.977 ================================ 00:07:54.977 Supported: No 00:07:54.977 00:07:54.977 Persistent Memory Region Support 00:07:54.977 ================================ 00:07:54.977 Supported: No 00:07:54.977 00:07:54.977 Admin Command Set Attributes 00:07:54.977 ============================ 00:07:54.977 Security Send/Receive: Not Supported 00:07:54.977 Format NVM: Supported 00:07:54.977 Firmware Activate/Download: Not Supported 00:07:54.977 Namespace Management: Supported 00:07:54.977 Device Self-Test: Not Supported 00:07:54.977 Directives: Supported 00:07:54.977 NVMe-MI: Not Supported 00:07:54.977 Virtualization Management: Not Supported 00:07:54.977 Doorbell Buffer Config: Supported 00:07:54.977 Get LBA Status Capability: Not Supported 00:07:54.977 Command & Feature Lockdown Capability: Not Supported 00:07:54.977 Abort Command Limit: 4 00:07:54.977 Async Event Request Limit: 4 00:07:54.977 Number of Firmware Slots: N/A 00:07:54.977 Firmware Slot 1 Read-Only: N/A 00:07:54.977 Firmware Activation Without Reset: N/A 00:07:54.977 Multiple Update Detection Support: N/A 00:07:54.977 Firmware Update Granularity: No Information Provided 00:07:54.977 Per-Namespace SMART Log: Yes 00:07:54.977 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.977 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:54.977 Command Effects Log Page: Supported 00:07:54.977 Get Log Page Extended Data: Supported 00:07:54.977 Telemetry Log Pages: Not Supported 00:07:54.977 Persistent Event Log Pages: Not Supported 00:07:54.977 Supported Log Pages Log Page: May Support 00:07:54.977 Commands Supported & Effects Log Page: Not Supported 00:07:54.977 Feature Identifiers & Effects Log Page:May Support 00:07:54.977 NVMe-MI Commands & Effects Log Page: May Support 
00:07:54.977 Data Area 4 for Telemetry Log: Not Supported 00:07:54.977 Error Log Page Entries Supported: 1 00:07:54.977 Keep Alive: Not Supported 00:07:54.977 00:07:54.977 NVM Command Set Attributes 00:07:54.977 ========================== 00:07:54.977 Submission Queue Entry Size 00:07:54.977 Max: 64 00:07:54.977 Min: 64 00:07:54.977 Completion Queue Entry Size 00:07:54.977 Max: 16 00:07:54.977 Min: 16 00:07:54.977 Number of Namespaces: 256 00:07:54.977 Compare Command: Supported 00:07:54.977 Write Uncorrectable Command: Not Supported 00:07:54.977 Dataset Management Command: Supported 00:07:54.977 Write Zeroes Command: Supported 00:07:54.977 Set Features Save Field: Supported 00:07:54.977 Reservations: Not Supported 00:07:54.977 Timestamp: Supported 00:07:54.977 Copy: Supported 00:07:54.977 Volatile Write Cache: Present 00:07:54.977 Atomic Write Unit (Normal): 1 00:07:54.977 Atomic Write Unit (PFail): 1 00:07:54.977 Atomic Compare & Write Unit: 1 00:07:54.977 Fused Compare & Write: Not Supported 00:07:54.977 Scatter-Gather List 00:07:54.977 SGL Command Set: Supported 00:07:54.977 SGL Keyed: Not Supported 00:07:54.977 SGL Bit Bucket Descriptor: Not Supported 00:07:54.977 SGL Metadata Pointer: Not Supported 00:07:54.977 Oversized SGL: Not Supported 00:07:54.977 SGL Metadata Address: Not Supported 00:07:54.977 SGL Offset: Not Supported 00:07:54.977 Transport SGL Data Block: Not Supported 00:07:54.977 Replay Protected Memory Block: Not Supported 00:07:54.977 00:07:54.977 Firmware Slot Information 00:07:54.978 ========================= 00:07:54.978 Active slot: 1 00:07:54.978 Slot 1 Firmware Revision: 1.0 00:07:54.978 00:07:54.978 00:07:54.978 Commands Supported and Effects 00:07:54.978 ============================== 00:07:54.978 Admin Commands 00:07:54.978 -------------- 00:07:54.978 Delete I/O Submission Queue (00h): Supported 00:07:54.978 Create I/O Submission Queue (01h): Supported 00:07:54.978 Get Log Page (02h): Supported 00:07:54.978 Delete I/O Completion Queue 
(04h): Supported 00:07:54.978 Create I/O Completion Queue (05h): Supported 00:07:54.978 Identify (06h): Supported 00:07:54.978 Abort (08h): Supported 00:07:54.978 Set Features (09h): Supported 00:07:54.978 Get Features (0Ah): Supported 00:07:54.978 Asynchronous Event Request (0Ch): Supported 00:07:54.978 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.978 Directive Send (19h): Supported 00:07:54.978 Directive Receive (1Ah): Supported 00:07:54.978 Virtualization Management (1Ch): Supported 00:07:54.978 Doorbell Buffer Config (7Ch): Supported 00:07:54.978 Format NVM (80h): Supported LBA-Change 00:07:54.978 I/O Commands 00:07:54.978 ------------ 00:07:54.978 Flush (00h): Supported LBA-Change 00:07:54.978 Write (01h): Supported LBA-Change 00:07:54.978 Read (02h): Supported 00:07:54.978 Compare (05h): Supported 00:07:54.978 Write Zeroes (08h): Supported LBA-Change 00:07:54.978 Dataset Management (09h): Supported LBA-Change 00:07:54.978 Unknown (0Ch): Supported 00:07:54.978 Unknown (12h): Supported 00:07:54.978 Copy (19h): Supported LBA-Change 00:07:54.978 Unknown (1Dh): Supported LBA-Change 00:07:54.978 00:07:54.978 Error Log 00:07:54.978 ========= 00:07:54.978 00:07:54.978 Arbitration 00:07:54.978 =========== 00:07:54.978 Arbitration Burst: no limit 00:07:54.978 00:07:54.978 Power Management 00:07:54.978 ================ 00:07:54.978 Number of Power States: 1 00:07:54.978 Current Power State: Power State #0 00:07:54.978 Power State #0: 00:07:54.978 Max Power: 25.00 W 00:07:54.978 Non-Operational State: Operational 00:07:54.978 Entry Latency: 16 microseconds 00:07:54.978 Exit Latency: 4 microseconds 00:07:54.978 Relative Read Throughput: 0 00:07:54.978 Relative Read Latency: 0 00:07:54.978 Relative Write Throughput: 0 00:07:54.978 Relative Write Latency: 0 00:07:54.978 Idle Power: Not Reported 00:07:54.978 Active Power: Not Reported 00:07:54.978 Non-Operational Permissive Mode: Not Supported 00:07:54.978 00:07:54.978 Health Information 00:07:54.978 
================== 00:07:54.978 Critical Warnings: 00:07:54.978 Available Spare Space: OK 00:07:54.978 Temperature: OK 00:07:54.978 Device Reliability: OK 00:07:54.978 Read Only: No 00:07:54.978 Volatile Memory Backup: OK 00:07:54.978 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.978 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.978 Available Spare: 0% 00:07:54.978 Available Spare Threshold: 0% 00:07:54.978 Life Percentage Used: 0% 00:07:54.978 Data Units Read: 1071 00:07:54.978 Data Units Written: 935 00:07:54.978 Host Read Commands: 56829 00:07:54.978 Host Write Commands: 55554 00:07:54.978 Controller Busy Time: 0 minutes 00:07:54.978 Power Cycles: 0 00:07:54.978 Power On Hours: 0 hours 00:07:54.978 Unsafe Shutdowns: 0 00:07:54.978 Unrecoverable Media Errors: 0 00:07:54.978 Lifetime Error Log Entries: 0 00:07:54.978 Warning Temperature Time: 0 minutes 00:07:54.978 Critical Temperature Time: 0 minutes 00:07:54.978 00:07:54.978 Number of Queues 00:07:54.978 ================ 00:07:54.978 Number of I/O Submission Queues: 64 00:07:54.978 Number of I/O Completion Queues: 64 00:07:54.978 00:07:54.978 ZNS Specific Controller Data 00:07:54.978 ============================ 00:07:54.978 Zone Append Size Limit: 0 00:07:54.978 00:07:54.978 00:07:54.978 Active Namespaces 00:07:54.978 ================= 00:07:54.978 Namespace ID:1 00:07:54.978 Error Recovery Timeout: Unlimited 00:07:54.978 Command Set Identifier: NVM (00h) 00:07:54.978 Deallocate: Supported 00:07:54.978 Deallocated/Unwritten Error: Supported 00:07:54.978 Deallocated Read Value: All 0x00 00:07:54.978 Deallocate in Write Zeroes: Not Supported 00:07:54.978 Deallocated Guard Field: 0xFFFF 00:07:54.978 Flush: Supported 00:07:54.978 Reservation: Not Supported 00:07:54.978 Namespace Sharing Capabilities: Private 00:07:54.978 Size (in LBAs): 1310720 (5GiB) 00:07:54.978 Capacity (in LBAs): 1310720 (5GiB) 00:07:54.978 Utilization (in LBAs): 1310720 (5GiB) 00:07:54.978 Thin Provisioning: Not Supported 
00:07:54.978 Per-NS Atomic Units: No 00:07:54.978 Maximum Single Source Range Length: 128 00:07:54.978 Maximum Copy Length: 128 00:07:54.978 Maximum Source Range Count: 128 00:07:54.978 NGUID/EUI64 Never Reused: No 00:07:54.978 Namespace Write Protected: No 00:07:54.978 Number of LBA Formats: 8 00:07:54.978 Current LBA Format: LBA Format #04 00:07:54.978 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.978 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.978 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.978 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.978 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.978 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.978 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.978 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.978 00:07:54.978 NVM Specific Namespace Data 00:07:54.978 =========================== 00:07:54.978 Logical Block Storage Tag Mask: 0 00:07:54.978 Protection Information Capabilities: 00:07:54.978 16b Guard Protection Information Storage Tag Support: No 00:07:54.978 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.978 Storage Tag Check Read Support: No 00:07:54.978 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.978 19:51:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:54.978 19:51:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:54.978 ===================================================== 00:07:54.978 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.978 ===================================================== 00:07:54.978 Controller Capabilities/Features 00:07:54.978 ================================ 00:07:54.978 Vendor ID: 1b36 00:07:54.978 Subsystem Vendor ID: 1af4 00:07:54.978 Serial Number: 12342 00:07:54.978 Model Number: QEMU NVMe Ctrl 00:07:54.978 Firmware Version: 8.0.0 00:07:54.978 Recommended Arb Burst: 6 00:07:54.978 IEEE OUI Identifier: 00 54 52 00:07:54.978 Multi-path I/O 00:07:54.978 May have multiple subsystem ports: No 00:07:54.978 May have multiple controllers: No 00:07:54.978 Associated with SR-IOV VF: No 00:07:54.978 Max Data Transfer Size: 524288 00:07:54.978 Max Number of Namespaces: 256 00:07:54.978 Max Number of I/O Queues: 64 00:07:54.978 NVMe Specification Version (VS): 1.4 00:07:54.978 NVMe Specification Version (Identify): 1.4 00:07:54.978 Maximum Queue Entries: 2048 00:07:54.978 Contiguous Queues Required: Yes 00:07:54.978 Arbitration Mechanisms Supported 00:07:54.978 Weighted Round Robin: Not Supported 00:07:54.978 Vendor Specific: Not Supported 00:07:54.978 Reset Timeout: 7500 ms 00:07:54.978 Doorbell Stride: 4 bytes 00:07:54.978 NVM Subsystem Reset: Not Supported 00:07:54.978 Command Sets Supported 00:07:54.978 NVM Command Set: Supported 00:07:54.978 Boot Partition: Not Supported 00:07:54.978 Memory Page Size Minimum: 4096 bytes 00:07:54.978 Memory Page Size Maximum: 65536 bytes 00:07:54.978 Persistent Memory Region: Not Supported 00:07:54.978 Optional Asynchronous Events Supported 00:07:54.978 Namespace Attribute Notices: 
Supported 00:07:54.978 Firmware Activation Notices: Not Supported 00:07:54.978 ANA Change Notices: Not Supported 00:07:54.978 PLE Aggregate Log Change Notices: Not Supported 00:07:54.978 LBA Status Info Alert Notices: Not Supported 00:07:54.978 EGE Aggregate Log Change Notices: Not Supported 00:07:54.978 Normal NVM Subsystem Shutdown event: Not Supported 00:07:54.978 Zone Descriptor Change Notices: Not Supported 00:07:54.978 Discovery Log Change Notices: Not Supported 00:07:54.978 Controller Attributes 00:07:54.978 128-bit Host Identifier: Not Supported 00:07:54.978 Non-Operational Permissive Mode: Not Supported 00:07:54.978 NVM Sets: Not Supported 00:07:54.978 Read Recovery Levels: Not Supported 00:07:54.978 Endurance Groups: Not Supported 00:07:54.978 Predictable Latency Mode: Not Supported 00:07:54.979 Traffic Based Keep ALive: Not Supported 00:07:54.979 Namespace Granularity: Not Supported 00:07:54.979 SQ Associations: Not Supported 00:07:54.979 UUID List: Not Supported 00:07:54.979 Multi-Domain Subsystem: Not Supported 00:07:54.979 Fixed Capacity Management: Not Supported 00:07:54.979 Variable Capacity Management: Not Supported 00:07:54.979 Delete Endurance Group: Not Supported 00:07:54.979 Delete NVM Set: Not Supported 00:07:54.979 Extended LBA Formats Supported: Supported 00:07:54.979 Flexible Data Placement Supported: Not Supported 00:07:54.979 00:07:54.979 Controller Memory Buffer Support 00:07:54.979 ================================ 00:07:54.979 Supported: No 00:07:54.979 00:07:54.979 Persistent Memory Region Support 00:07:54.979 ================================ 00:07:54.979 Supported: No 00:07:54.979 00:07:54.979 Admin Command Set Attributes 00:07:54.979 ============================ 00:07:54.979 Security Send/Receive: Not Supported 00:07:54.979 Format NVM: Supported 00:07:54.979 Firmware Activate/Download: Not Supported 00:07:54.979 Namespace Management: Supported 00:07:54.979 Device Self-Test: Not Supported 00:07:54.979 Directives: Supported 
00:07:54.979 NVMe-MI: Not Supported 00:07:54.979 Virtualization Management: Not Supported 00:07:54.979 Doorbell Buffer Config: Supported 00:07:54.979 Get LBA Status Capability: Not Supported 00:07:54.979 Command & Feature Lockdown Capability: Not Supported 00:07:54.979 Abort Command Limit: 4 00:07:54.979 Async Event Request Limit: 4 00:07:54.979 Number of Firmware Slots: N/A 00:07:54.979 Firmware Slot 1 Read-Only: N/A 00:07:54.979 Firmware Activation Without Reset: N/A 00:07:54.979 Multiple Update Detection Support: N/A 00:07:54.979 Firmware Update Granularity: No Information Provided 00:07:54.979 Per-Namespace SMART Log: Yes 00:07:54.979 Asymmetric Namespace Access Log Page: Not Supported 00:07:54.979 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:54.979 Command Effects Log Page: Supported 00:07:54.979 Get Log Page Extended Data: Supported 00:07:54.979 Telemetry Log Pages: Not Supported 00:07:54.979 Persistent Event Log Pages: Not Supported 00:07:54.979 Supported Log Pages Log Page: May Support 00:07:54.979 Commands Supported & Effects Log Page: Not Supported 00:07:54.979 Feature Identifiers & Effects Log Page:May Support 00:07:54.979 NVMe-MI Commands & Effects Log Page: May Support 00:07:54.979 Data Area 4 for Telemetry Log: Not Supported 00:07:54.979 Error Log Page Entries Supported: 1 00:07:54.979 Keep Alive: Not Supported 00:07:54.979 00:07:54.979 NVM Command Set Attributes 00:07:54.979 ========================== 00:07:54.979 Submission Queue Entry Size 00:07:54.979 Max: 64 00:07:54.979 Min: 64 00:07:54.979 Completion Queue Entry Size 00:07:54.979 Max: 16 00:07:54.979 Min: 16 00:07:54.979 Number of Namespaces: 256 00:07:54.979 Compare Command: Supported 00:07:54.979 Write Uncorrectable Command: Not Supported 00:07:54.979 Dataset Management Command: Supported 00:07:54.979 Write Zeroes Command: Supported 00:07:54.979 Set Features Save Field: Supported 00:07:54.979 Reservations: Not Supported 00:07:54.979 Timestamp: Supported 00:07:54.979 Copy: Supported 
00:07:54.979 Volatile Write Cache: Present 00:07:54.979 Atomic Write Unit (Normal): 1 00:07:54.979 Atomic Write Unit (PFail): 1 00:07:54.979 Atomic Compare & Write Unit: 1 00:07:54.979 Fused Compare & Write: Not Supported 00:07:54.979 Scatter-Gather List 00:07:54.979 SGL Command Set: Supported 00:07:54.979 SGL Keyed: Not Supported 00:07:54.979 SGL Bit Bucket Descriptor: Not Supported 00:07:54.979 SGL Metadata Pointer: Not Supported 00:07:54.979 Oversized SGL: Not Supported 00:07:54.979 SGL Metadata Address: Not Supported 00:07:54.979 SGL Offset: Not Supported 00:07:54.979 Transport SGL Data Block: Not Supported 00:07:54.979 Replay Protected Memory Block: Not Supported 00:07:54.979 00:07:54.979 Firmware Slot Information 00:07:54.979 ========================= 00:07:54.979 Active slot: 1 00:07:54.979 Slot 1 Firmware Revision: 1.0 00:07:54.979 00:07:54.979 00:07:54.979 Commands Supported and Effects 00:07:54.979 ============================== 00:07:54.979 Admin Commands 00:07:54.979 -------------- 00:07:54.979 Delete I/O Submission Queue (00h): Supported 00:07:54.979 Create I/O Submission Queue (01h): Supported 00:07:54.979 Get Log Page (02h): Supported 00:07:54.979 Delete I/O Completion Queue (04h): Supported 00:07:54.979 Create I/O Completion Queue (05h): Supported 00:07:54.979 Identify (06h): Supported 00:07:54.979 Abort (08h): Supported 00:07:54.979 Set Features (09h): Supported 00:07:54.979 Get Features (0Ah): Supported 00:07:54.979 Asynchronous Event Request (0Ch): Supported 00:07:54.979 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:54.979 Directive Send (19h): Supported 00:07:54.979 Directive Receive (1Ah): Supported 00:07:54.979 Virtualization Management (1Ch): Supported 00:07:54.979 Doorbell Buffer Config (7Ch): Supported 00:07:54.979 Format NVM (80h): Supported LBA-Change 00:07:54.979 I/O Commands 00:07:54.979 ------------ 00:07:54.979 Flush (00h): Supported LBA-Change 00:07:54.979 Write (01h): Supported LBA-Change 00:07:54.979 Read (02h): 
Supported 00:07:54.979 Compare (05h): Supported 00:07:54.979 Write Zeroes (08h): Supported LBA-Change 00:07:54.979 Dataset Management (09h): Supported LBA-Change 00:07:54.979 Unknown (0Ch): Supported 00:07:54.979 Unknown (12h): Supported 00:07:54.979 Copy (19h): Supported LBA-Change 00:07:54.979 Unknown (1Dh): Supported LBA-Change 00:07:54.979 00:07:54.979 Error Log 00:07:54.979 ========= 00:07:54.979 00:07:54.979 Arbitration 00:07:54.979 =========== 00:07:54.979 Arbitration Burst: no limit 00:07:54.979 00:07:54.979 Power Management 00:07:54.979 ================ 00:07:54.979 Number of Power States: 1 00:07:54.979 Current Power State: Power State #0 00:07:54.979 Power State #0: 00:07:54.979 Max Power: 25.00 W 00:07:54.979 Non-Operational State: Operational 00:07:54.979 Entry Latency: 16 microseconds 00:07:54.979 Exit Latency: 4 microseconds 00:07:54.979 Relative Read Throughput: 0 00:07:54.979 Relative Read Latency: 0 00:07:54.979 Relative Write Throughput: 0 00:07:54.979 Relative Write Latency: 0 00:07:54.979 Idle Power: Not Reported 00:07:54.979 Active Power: Not Reported 00:07:54.979 Non-Operational Permissive Mode: Not Supported 00:07:54.979 00:07:54.979 Health Information 00:07:54.979 ================== 00:07:54.979 Critical Warnings: 00:07:54.979 Available Spare Space: OK 00:07:54.979 Temperature: OK 00:07:54.979 Device Reliability: OK 00:07:54.979 Read Only: No 00:07:54.979 Volatile Memory Backup: OK 00:07:54.979 Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.979 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:54.979 Available Spare: 0% 00:07:54.979 Available Spare Threshold: 0% 00:07:54.979 Life Percentage Used: 0% 00:07:54.979 Data Units Read: 2257 00:07:54.979 Data Units Written: 2045 00:07:54.979 Host Read Commands: 117296 00:07:54.979 Host Write Commands: 115565 00:07:54.979 Controller Busy Time: 0 minutes 00:07:54.979 Power Cycles: 0 00:07:54.979 Power On Hours: 0 hours 00:07:54.979 Unsafe Shutdowns: 0 00:07:54.979 Unrecoverable Media 
Errors: 0 00:07:54.979 Lifetime Error Log Entries: 0 00:07:54.979 Warning Temperature Time: 0 minutes 00:07:54.979 Critical Temperature Time: 0 minutes 00:07:54.979 00:07:54.979 Number of Queues 00:07:54.979 ================ 00:07:54.979 Number of I/O Submission Queues: 64 00:07:54.979 Number of I/O Completion Queues: 64 00:07:54.979 00:07:54.979 ZNS Specific Controller Data 00:07:54.979 ============================ 00:07:54.979 Zone Append Size Limit: 0 00:07:54.979 00:07:54.979 00:07:54.979 Active Namespaces 00:07:54.979 ================= 00:07:54.979 Namespace ID:1 00:07:54.979 Error Recovery Timeout: Unlimited 00:07:54.979 Command Set Identifier: NVM (00h) 00:07:54.979 Deallocate: Supported 00:07:54.979 Deallocated/Unwritten Error: Supported 00:07:54.979 Deallocated Read Value: All 0x00 00:07:54.979 Deallocate in Write Zeroes: Not Supported 00:07:54.979 Deallocated Guard Field: 0xFFFF 00:07:54.979 Flush: Supported 00:07:54.979 Reservation: Not Supported 00:07:54.979 Namespace Sharing Capabilities: Private 00:07:54.979 Size (in LBAs): 1048576 (4GiB) 00:07:54.979 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.979 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.979 Thin Provisioning: Not Supported 00:07:54.979 Per-NS Atomic Units: No 00:07:54.979 Maximum Single Source Range Length: 128 00:07:54.979 Maximum Copy Length: 128 00:07:54.979 Maximum Source Range Count: 128 00:07:54.979 NGUID/EUI64 Never Reused: No 00:07:54.979 Namespace Write Protected: No 00:07:54.979 Number of LBA Formats: 8 00:07:54.979 Current LBA Format: LBA Format #04 00:07:54.979 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.979 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.980 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.980 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.980 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.980 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.980 LBA Format #06: Data Size: 4096 Metadata Size: 16 
00:07:54.980 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.980 00:07:54.980 NVM Specific Namespace Data 00:07:54.980 =========================== 00:07:54.980 Logical Block Storage Tag Mask: 0 00:07:54.980 Protection Information Capabilities: 00:07:54.980 16b Guard Protection Information Storage Tag Support: No 00:07:54.980 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.980 Storage Tag Check Read Support: No 00:07:54.980 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Namespace ID:2 00:07:54.980 Error Recovery Timeout: Unlimited 00:07:54.980 Command Set Identifier: NVM (00h) 00:07:54.980 Deallocate: Supported 00:07:54.980 Deallocated/Unwritten Error: Supported 00:07:54.980 Deallocated Read Value: All 0x00 00:07:54.980 Deallocate in Write Zeroes: Not Supported 00:07:54.980 Deallocated Guard Field: 0xFFFF 00:07:54.980 Flush: Supported 00:07:54.980 Reservation: Not Supported 00:07:54.980 Namespace Sharing Capabilities: Private 00:07:54.980 Size (in LBAs): 1048576 (4GiB) 00:07:54.980 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.980 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.980 Thin Provisioning: Not Supported 00:07:54.980 Per-NS Atomic Units: No 
00:07:54.980 Maximum Single Source Range Length: 128 00:07:54.980 Maximum Copy Length: 128 00:07:54.980 Maximum Source Range Count: 128 00:07:54.980 NGUID/EUI64 Never Reused: No 00:07:54.980 Namespace Write Protected: No 00:07:54.980 Number of LBA Formats: 8 00:07:54.980 Current LBA Format: LBA Format #04 00:07:54.980 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.980 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.980 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.980 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.980 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.980 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.980 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.980 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.980 00:07:54.980 NVM Specific Namespace Data 00:07:54.980 =========================== 00:07:54.980 Logical Block Storage Tag Mask: 0 00:07:54.980 Protection Information Capabilities: 00:07:54.980 16b Guard Protection Information Storage Tag Support: No 00:07:54.980 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:54.980 Storage Tag Check Read Support: No 00:07:54.980 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:54.980 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:54.980 Namespace ID:3 00:07:54.980 Error Recovery Timeout: Unlimited 00:07:54.980 Command Set Identifier: NVM (00h) 00:07:54.980 Deallocate: Supported 00:07:54.980 Deallocated/Unwritten Error: Supported 00:07:54.980 Deallocated Read Value: All 0x00 00:07:54.980 Deallocate in Write Zeroes: Not Supported 00:07:54.980 Deallocated Guard Field: 0xFFFF 00:07:54.980 Flush: Supported 00:07:54.980 Reservation: Not Supported 00:07:54.980 Namespace Sharing Capabilities: Private 00:07:54.980 Size (in LBAs): 1048576 (4GiB) 00:07:54.980 Capacity (in LBAs): 1048576 (4GiB) 00:07:54.980 Utilization (in LBAs): 1048576 (4GiB) 00:07:54.980 Thin Provisioning: Not Supported 00:07:54.980 Per-NS Atomic Units: No 00:07:54.980 Maximum Single Source Range Length: 128 00:07:54.980 Maximum Copy Length: 128 00:07:54.980 Maximum Source Range Count: 128 00:07:54.980 NGUID/EUI64 Never Reused: No 00:07:54.980 Namespace Write Protected: No 00:07:54.980 Number of LBA Formats: 8 00:07:54.980 Current LBA Format: LBA Format #04 00:07:54.980 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:54.980 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:54.980 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:54.980 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:54.980 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:54.980 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:54.980 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:54.980 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:54.980 00:07:54.980 NVM Specific Namespace Data 00:07:54.980 =========================== 00:07:54.980 Logical Block Storage Tag Mask: 0 00:07:54.980 Protection Information Capabilities: 00:07:54.980 16b Guard Protection Information Storage Tag Support: No 00:07:54.980 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.239 Storage Tag Check Read Support: No 00:07:55.239 Extended LBA Format #00: Storage 
Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.239 19:51:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:55.239 19:51:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:55.239 ===================================================== 00:07:55.239 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:55.239 ===================================================== 00:07:55.239 Controller Capabilities/Features 00:07:55.239 ================================ 00:07:55.239 Vendor ID: 1b36 00:07:55.239 Subsystem Vendor ID: 1af4 00:07:55.239 Serial Number: 12343 00:07:55.239 Model Number: QEMU NVMe Ctrl 00:07:55.239 Firmware Version: 8.0.0 00:07:55.239 Recommended Arb Burst: 6 00:07:55.239 IEEE OUI Identifier: 00 54 52 00:07:55.239 Multi-path I/O 00:07:55.239 May have multiple subsystem ports: No 00:07:55.239 May have multiple controllers: Yes 00:07:55.239 Associated with SR-IOV VF: No 00:07:55.239 Max Data Transfer Size: 524288 00:07:55.239 Max Number of Namespaces: 256 00:07:55.239 Max Number of I/O Queues: 64 00:07:55.239 NVMe Specification Version (VS): 1.4 00:07:55.239 NVMe Specification Version (Identify): 1.4 
00:07:55.239 Maximum Queue Entries: 2048 00:07:55.239 Contiguous Queues Required: Yes 00:07:55.239 Arbitration Mechanisms Supported 00:07:55.239 Weighted Round Robin: Not Supported 00:07:55.239 Vendor Specific: Not Supported 00:07:55.239 Reset Timeout: 7500 ms 00:07:55.239 Doorbell Stride: 4 bytes 00:07:55.239 NVM Subsystem Reset: Not Supported 00:07:55.239 Command Sets Supported 00:07:55.239 NVM Command Set: Supported 00:07:55.239 Boot Partition: Not Supported 00:07:55.239 Memory Page Size Minimum: 4096 bytes 00:07:55.239 Memory Page Size Maximum: 65536 bytes 00:07:55.239 Persistent Memory Region: Not Supported 00:07:55.239 Optional Asynchronous Events Supported 00:07:55.239 Namespace Attribute Notices: Supported 00:07:55.239 Firmware Activation Notices: Not Supported 00:07:55.239 ANA Change Notices: Not Supported 00:07:55.239 PLE Aggregate Log Change Notices: Not Supported 00:07:55.239 LBA Status Info Alert Notices: Not Supported 00:07:55.239 EGE Aggregate Log Change Notices: Not Supported 00:07:55.239 Normal NVM Subsystem Shutdown event: Not Supported 00:07:55.239 Zone Descriptor Change Notices: Not Supported 00:07:55.239 Discovery Log Change Notices: Not Supported 00:07:55.239 Controller Attributes 00:07:55.239 128-bit Host Identifier: Not Supported 00:07:55.239 Non-Operational Permissive Mode: Not Supported 00:07:55.239 NVM Sets: Not Supported 00:07:55.239 Read Recovery Levels: Not Supported 00:07:55.239 Endurance Groups: Supported 00:07:55.239 Predictable Latency Mode: Not Supported 00:07:55.239 Traffic Based Keep ALive: Not Supported 00:07:55.239 Namespace Granularity: Not Supported 00:07:55.239 SQ Associations: Not Supported 00:07:55.239 UUID List: Not Supported 00:07:55.239 Multi-Domain Subsystem: Not Supported 00:07:55.239 Fixed Capacity Management: Not Supported 00:07:55.239 Variable Capacity Management: Not Supported 00:07:55.239 Delete Endurance Group: Not Supported 00:07:55.239 Delete NVM Set: Not Supported 00:07:55.239 Extended LBA Formats Supported: 
Supported 00:07:55.239 Flexible Data Placement Supported: Supported 00:07:55.239 00:07:55.239 Controller Memory Buffer Support 00:07:55.239 ================================ 00:07:55.239 Supported: No 00:07:55.239 00:07:55.239 Persistent Memory Region Support 00:07:55.239 ================================ 00:07:55.239 Supported: No 00:07:55.239 00:07:55.239 Admin Command Set Attributes 00:07:55.239 ============================ 00:07:55.239 Security Send/Receive: Not Supported 00:07:55.239 Format NVM: Supported 00:07:55.239 Firmware Activate/Download: Not Supported 00:07:55.239 Namespace Management: Supported 00:07:55.239 Device Self-Test: Not Supported 00:07:55.239 Directives: Supported 00:07:55.239 NVMe-MI: Not Supported 00:07:55.239 Virtualization Management: Not Supported 00:07:55.239 Doorbell Buffer Config: Supported 00:07:55.239 Get LBA Status Capability: Not Supported 00:07:55.239 Command & Feature Lockdown Capability: Not Supported 00:07:55.239 Abort Command Limit: 4 00:07:55.239 Async Event Request Limit: 4 00:07:55.239 Number of Firmware Slots: N/A 00:07:55.239 Firmware Slot 1 Read-Only: N/A 00:07:55.239 Firmware Activation Without Reset: N/A 00:07:55.239 Multiple Update Detection Support: N/A 00:07:55.239 Firmware Update Granularity: No Information Provided 00:07:55.239 Per-Namespace SMART Log: Yes 00:07:55.239 Asymmetric Namespace Access Log Page: Not Supported 00:07:55.239 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:55.239 Command Effects Log Page: Supported 00:07:55.239 Get Log Page Extended Data: Supported 00:07:55.239 Telemetry Log Pages: Not Supported 00:07:55.239 Persistent Event Log Pages: Not Supported 00:07:55.239 Supported Log Pages Log Page: May Support 00:07:55.239 Commands Supported & Effects Log Page: Not Supported 00:07:55.239 Feature Identifiers & Effects Log Page:May Support 00:07:55.239 NVMe-MI Commands & Effects Log Page: May Support 00:07:55.239 Data Area 4 for Telemetry Log: Not Supported 00:07:55.239 Error Log Page Entries 
Supported: 1 00:07:55.239 Keep Alive: Not Supported 00:07:55.239 00:07:55.239 NVM Command Set Attributes 00:07:55.239 ========================== 00:07:55.239 Submission Queue Entry Size 00:07:55.239 Max: 64 00:07:55.239 Min: 64 00:07:55.239 Completion Queue Entry Size 00:07:55.239 Max: 16 00:07:55.239 Min: 16 00:07:55.239 Number of Namespaces: 256 00:07:55.239 Compare Command: Supported 00:07:55.239 Write Uncorrectable Command: Not Supported 00:07:55.239 Dataset Management Command: Supported 00:07:55.239 Write Zeroes Command: Supported 00:07:55.239 Set Features Save Field: Supported 00:07:55.239 Reservations: Not Supported 00:07:55.239 Timestamp: Supported 00:07:55.239 Copy: Supported 00:07:55.239 Volatile Write Cache: Present 00:07:55.239 Atomic Write Unit (Normal): 1 00:07:55.239 Atomic Write Unit (PFail): 1 00:07:55.239 Atomic Compare & Write Unit: 1 00:07:55.239 Fused Compare & Write: Not Supported 00:07:55.239 Scatter-Gather List 00:07:55.239 SGL Command Set: Supported 00:07:55.239 SGL Keyed: Not Supported 00:07:55.239 SGL Bit Bucket Descriptor: Not Supported 00:07:55.239 SGL Metadata Pointer: Not Supported 00:07:55.239 Oversized SGL: Not Supported 00:07:55.239 SGL Metadata Address: Not Supported 00:07:55.239 SGL Offset: Not Supported 00:07:55.239 Transport SGL Data Block: Not Supported 00:07:55.239 Replay Protected Memory Block: Not Supported 00:07:55.239 00:07:55.239 Firmware Slot Information 00:07:55.239 ========================= 00:07:55.240 Active slot: 1 00:07:55.240 Slot 1 Firmware Revision: 1.0 00:07:55.240 00:07:55.240 00:07:55.240 Commands Supported and Effects 00:07:55.240 ============================== 00:07:55.240 Admin Commands 00:07:55.240 -------------- 00:07:55.240 Delete I/O Submission Queue (00h): Supported 00:07:55.240 Create I/O Submission Queue (01h): Supported 00:07:55.240 Get Log Page (02h): Supported 00:07:55.240 Delete I/O Completion Queue (04h): Supported 00:07:55.240 Create I/O Completion Queue (05h): Supported 00:07:55.240 Identify 
(06h): Supported 00:07:55.240 Abort (08h): Supported 00:07:55.240 Set Features (09h): Supported 00:07:55.240 Get Features (0Ah): Supported 00:07:55.240 Asynchronous Event Request (0Ch): Supported 00:07:55.240 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:55.240 Directive Send (19h): Supported 00:07:55.240 Directive Receive (1Ah): Supported 00:07:55.240 Virtualization Management (1Ch): Supported 00:07:55.240 Doorbell Buffer Config (7Ch): Supported 00:07:55.240 Format NVM (80h): Supported LBA-Change 00:07:55.240 I/O Commands 00:07:55.240 ------------ 00:07:55.240 Flush (00h): Supported LBA-Change 00:07:55.240 Write (01h): Supported LBA-Change 00:07:55.240 Read (02h): Supported 00:07:55.240 Compare (05h): Supported 00:07:55.240 Write Zeroes (08h): Supported LBA-Change 00:07:55.240 Dataset Management (09h): Supported LBA-Change 00:07:55.240 Unknown (0Ch): Supported 00:07:55.240 Unknown (12h): Supported 00:07:55.240 Copy (19h): Supported LBA-Change 00:07:55.240 Unknown (1Dh): Supported LBA-Change 00:07:55.240 00:07:55.240 Error Log 00:07:55.240 ========= 00:07:55.240 00:07:55.240 Arbitration 00:07:55.240 =========== 00:07:55.240 Arbitration Burst: no limit 00:07:55.240 00:07:55.240 Power Management 00:07:55.240 ================ 00:07:55.240 Number of Power States: 1 00:07:55.240 Current Power State: Power State #0 00:07:55.240 Power State #0: 00:07:55.240 Max Power: 25.00 W 00:07:55.240 Non-Operational State: Operational 00:07:55.240 Entry Latency: 16 microseconds 00:07:55.240 Exit Latency: 4 microseconds 00:07:55.240 Relative Read Throughput: 0 00:07:55.240 Relative Read Latency: 0 00:07:55.240 Relative Write Throughput: 0 00:07:55.240 Relative Write Latency: 0 00:07:55.240 Idle Power: Not Reported 00:07:55.240 Active Power: Not Reported 00:07:55.240 Non-Operational Permissive Mode: Not Supported 00:07:55.240 00:07:55.240 Health Information 00:07:55.240 ================== 00:07:55.240 Critical Warnings: 00:07:55.240 Available Spare Space: OK 
00:07:55.240 Temperature: OK 00:07:55.240 Device Reliability: OK 00:07:55.240 Read Only: No 00:07:55.240 Volatile Memory Backup: OK 00:07:55.240 Current Temperature: 323 Kelvin (50 Celsius) 00:07:55.240 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:55.240 Available Spare: 0% 00:07:55.240 Available Spare Threshold: 0% 00:07:55.240 Life Percentage Used: 0% 00:07:55.240 Data Units Read: 940 00:07:55.240 Data Units Written: 869 00:07:55.240 Host Read Commands: 40724 00:07:55.240 Host Write Commands: 40147 00:07:55.240 Controller Busy Time: 0 minutes 00:07:55.240 Power Cycles: 0 00:07:55.240 Power On Hours: 0 hours 00:07:55.240 Unsafe Shutdowns: 0 00:07:55.240 Unrecoverable Media Errors: 0 00:07:55.240 Lifetime Error Log Entries: 0 00:07:55.240 Warning Temperature Time: 0 minutes 00:07:55.240 Critical Temperature Time: 0 minutes 00:07:55.240 00:07:55.240 Number of Queues 00:07:55.240 ================ 00:07:55.240 Number of I/O Submission Queues: 64 00:07:55.240 Number of I/O Completion Queues: 64 00:07:55.240 00:07:55.240 ZNS Specific Controller Data 00:07:55.240 ============================ 00:07:55.240 Zone Append Size Limit: 0 00:07:55.240 00:07:55.240 00:07:55.240 Active Namespaces 00:07:55.240 ================= 00:07:55.240 Namespace ID:1 00:07:55.240 Error Recovery Timeout: Unlimited 00:07:55.240 Command Set Identifier: NVM (00h) 00:07:55.240 Deallocate: Supported 00:07:55.240 Deallocated/Unwritten Error: Supported 00:07:55.240 Deallocated Read Value: All 0x00 00:07:55.240 Deallocate in Write Zeroes: Not Supported 00:07:55.240 Deallocated Guard Field: 0xFFFF 00:07:55.240 Flush: Supported 00:07:55.240 Reservation: Not Supported 00:07:55.240 Namespace Sharing Capabilities: Multiple Controllers 00:07:55.240 Size (in LBAs): 262144 (1GiB) 00:07:55.240 Capacity (in LBAs): 262144 (1GiB) 00:07:55.240 Utilization (in LBAs): 262144 (1GiB) 00:07:55.240 Thin Provisioning: Not Supported 00:07:55.240 Per-NS Atomic Units: No 00:07:55.240 Maximum Single Source Range 
Length: 128 00:07:55.240 Maximum Copy Length: 128 00:07:55.240 Maximum Source Range Count: 128 00:07:55.240 NGUID/EUI64 Never Reused: No 00:07:55.240 Namespace Write Protected: No 00:07:55.240 Endurance group ID: 1 00:07:55.240 Number of LBA Formats: 8 00:07:55.240 Current LBA Format: LBA Format #04 00:07:55.240 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:55.240 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:55.240 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:55.240 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:55.240 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:55.240 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:55.240 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:55.240 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:55.240 00:07:55.240 Get Feature FDP: 00:07:55.240 ================ 00:07:55.240 Enabled: Yes 00:07:55.240 FDP configuration index: 0 00:07:55.240 00:07:55.240 FDP configurations log page 00:07:55.240 =========================== 00:07:55.240 Number of FDP configurations: 1 00:07:55.240 Version: 0 00:07:55.240 Size: 112 00:07:55.240 FDP Configuration Descriptor: 0 00:07:55.240 Descriptor Size: 96 00:07:55.240 Reclaim Group Identifier format: 2 00:07:55.240 FDP Volatile Write Cache: Not Present 00:07:55.240 FDP Configuration: Valid 00:07:55.240 Vendor Specific Size: 0 00:07:55.240 Number of Reclaim Groups: 2 00:07:55.240 Number of Reclaim Unit Handles: 8 00:07:55.240 Max Placement Identifiers: 128 00:07:55.240 Number of Namespaces Supported: 256 00:07:55.240 Reclaim unit Nominal Size: 6000000 bytes 00:07:55.240 Estimated Reclaim Unit Time Limit: Not Reported 00:07:55.240 RUH Desc #000: RUH Type: Initially Isolated 00:07:55.240 RUH Desc #001: RUH Type: Initially Isolated 00:07:55.240 RUH Desc #002: RUH Type: Initially Isolated 00:07:55.240 RUH Desc #003: RUH Type: Initially Isolated 00:07:55.240 RUH Desc #004: RUH Type: Initially Isolated 00:07:55.240 RUH Desc #005: RUH Type: 
Initially Isolated 00:07:55.240 RUH Desc #006: RUH Type: Initially Isolated 00:07:55.240 RUH Desc #007: RUH Type: Initially Isolated 00:07:55.240 00:07:55.240 FDP reclaim unit handle usage log page 00:07:55.240 ====================================== 00:07:55.240 Number of Reclaim Unit Handles: 8 00:07:55.240 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:55.240 RUH Usage Desc #001: RUH Attributes: Unused 00:07:55.240 RUH Usage Desc #002: RUH Attributes: Unused 00:07:55.240 RUH Usage Desc #003: RUH Attributes: Unused 00:07:55.240 RUH Usage Desc #004: RUH Attributes: Unused 00:07:55.240 RUH Usage Desc #005: RUH Attributes: Unused 00:07:55.240 RUH Usage Desc #006: RUH Attributes: Unused 00:07:55.240 RUH Usage Desc #007: RUH Attributes: Unused 00:07:55.240 00:07:55.240 FDP statistics log page 00:07:55.240 ======================= 00:07:55.240 Host bytes with metadata written: 551854080 00:07:55.240 Media bytes with metadata written: 551931904 00:07:55.240 Media bytes erased: 0 00:07:55.240 00:07:55.240 FDP events log page 00:07:55.240 =================== 00:07:55.240 Number of FDP events: 0 00:07:55.240 00:07:55.240 NVM Specific Namespace Data 00:07:55.240 =========================== 00:07:55.240 Logical Block Storage Tag Mask: 0 00:07:55.240 Protection Information Capabilities: 00:07:55.240 16b Guard Protection Information Storage Tag Support: No 00:07:55.240 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:55.240 Storage Tag Check Read Support: No 00:07:55.240 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:55.240 ************************************ 00:07:55.240 END TEST nvme_identify 00:07:55.240 ************************************ 00:07:55.240 00:07:55.240 real 0m1.093s 00:07:55.240 user 0m0.390s 00:07:55.240 sys 0m0.498s 00:07:55.240 19:51:39 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.240 19:51:39 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:55.240 19:51:39 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:55.241 19:51:39 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.241 19:51:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.241 19:51:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.499 ************************************ 00:07:55.499 START TEST nvme_perf 00:07:55.499 ************************************ 00:07:55.499 19:51:39 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:55.499 19:51:39 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:56.482 Initializing NVMe Controllers 00:07:56.482 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.482 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.482 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.482 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.482 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:56.482 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:56.482 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:56.482 Associating PCIE 
(0000:00:12.0) NSID 1 with lcore 0 00:07:56.482 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:56.482 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:56.482 Initialization complete. Launching workers. 00:07:56.482 ======================================================== 00:07:56.482 Latency(us) 00:07:56.482 Device Information : IOPS MiB/s Average min max 00:07:56.482 PCIE (0000:00:10.0) NSID 1 from core 0: 18586.73 217.81 6895.32 5631.68 37048.23 00:07:56.482 PCIE (0000:00:11.0) NSID 1 from core 0: 18586.73 217.81 6886.25 5734.57 35298.41 00:07:56.482 PCIE (0000:00:13.0) NSID 1 from core 0: 18586.73 217.81 6875.91 5734.85 34917.00 00:07:56.482 PCIE (0000:00:12.0) NSID 1 from core 0: 18586.73 217.81 6865.54 5733.64 33566.73 00:07:56.482 PCIE (0000:00:12.0) NSID 2 from core 0: 18586.73 217.81 6855.10 5769.59 32273.47 00:07:56.482 PCIE (0000:00:12.0) NSID 3 from core 0: 18586.73 217.81 6844.72 5730.08 30921.90 00:07:56.482 ======================================================== 00:07:56.482 Total : 111520.40 1306.88 6870.47 5631.68 37048.23 00:07:56.482 00:07:56.482 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:56.482 ================================================================================= 00:07:56.482 1.00000% : 5797.415us 00:07:56.482 10.00000% : 5948.652us 00:07:56.482 25.00000% : 6150.302us 00:07:56.482 50.00000% : 6452.775us 00:07:56.482 75.00000% : 6755.249us 00:07:56.482 90.00000% : 8015.557us 00:07:56.482 95.00000% : 9628.751us 00:07:56.482 98.00000% : 11141.120us 00:07:56.482 99.00000% : 14518.745us 00:07:56.482 99.50000% : 26012.751us 00:07:56.482 99.90000% : 36296.862us 00:07:56.482 99.99000% : 37103.458us 00:07:56.482 99.99900% : 37103.458us 00:07:56.482 99.99990% : 37103.458us 00:07:56.482 99.99999% : 37103.458us 00:07:56.482 00:07:56.482 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:56.482 ================================================================================= 
00:07:56.482 1.00000% : 5873.034us 00:07:56.482 10.00000% : 6024.271us 00:07:56.482 25.00000% : 6175.508us 00:07:56.482 50.00000% : 6452.775us 00:07:56.482 75.00000% : 6704.837us 00:07:56.482 90.00000% : 8065.969us 00:07:56.482 95.00000% : 9679.163us 00:07:56.482 98.00000% : 11040.295us 00:07:56.482 99.00000% : 13913.797us 00:07:56.482 99.50000% : 25407.803us 00:07:56.482 99.90000% : 34885.317us 00:07:56.482 99.99000% : 35288.615us 00:07:56.482 99.99900% : 35490.265us 00:07:56.482 99.99990% : 35490.265us 00:07:56.482 99.99999% : 35490.265us 00:07:56.482 00:07:56.482 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:56.482 ================================================================================= 00:07:56.482 1.00000% : 5847.828us 00:07:56.482 10.00000% : 6024.271us 00:07:56.482 25.00000% : 6175.508us 00:07:56.482 50.00000% : 6452.775us 00:07:56.482 75.00000% : 6704.837us 00:07:56.482 90.00000% : 7914.732us 00:07:56.482 95.00000% : 9729.575us 00:07:56.482 98.00000% : 11090.708us 00:07:56.482 99.00000% : 14216.271us 00:07:56.482 99.50000% : 24500.382us 00:07:56.482 99.90000% : 34683.668us 00:07:56.482 99.99000% : 35086.966us 00:07:56.482 99.99900% : 35086.966us 00:07:56.482 99.99990% : 35086.966us 00:07:56.482 99.99999% : 35086.966us 00:07:56.482 00:07:56.482 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:56.482 ================================================================================= 00:07:56.482 1.00000% : 5873.034us 00:07:56.482 10.00000% : 6024.271us 00:07:56.482 25.00000% : 6175.508us 00:07:56.482 50.00000% : 6452.775us 00:07:56.482 75.00000% : 6704.837us 00:07:56.482 90.00000% : 7763.495us 00:07:56.482 95.00000% : 9679.163us 00:07:56.482 98.00000% : 11241.945us 00:07:56.482 99.00000% : 13712.148us 00:07:56.482 99.50000% : 23693.785us 00:07:56.482 99.90000% : 33272.123us 00:07:56.482 99.99000% : 33675.422us 00:07:56.482 99.99900% : 33675.422us 00:07:56.482 99.99990% : 33675.422us 00:07:56.482 99.99999% 
: 33675.422us 00:07:56.482 00:07:56.482 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:56.482 ================================================================================= 00:07:56.482 1.00000% : 5873.034us 00:07:56.482 10.00000% : 6024.271us 00:07:56.482 25.00000% : 6175.508us 00:07:56.482 50.00000% : 6452.775us 00:07:56.482 75.00000% : 6704.837us 00:07:56.482 90.00000% : 7763.495us 00:07:56.483 95.00000% : 9527.926us 00:07:56.483 98.00000% : 11342.769us 00:07:56.483 99.00000% : 13107.200us 00:07:56.483 99.50000% : 22887.188us 00:07:56.483 99.90000% : 31860.578us 00:07:56.483 99.99000% : 32263.877us 00:07:56.483 99.99900% : 32465.526us 00:07:56.483 99.99990% : 32465.526us 00:07:56.483 99.99999% : 32465.526us 00:07:56.483 00:07:56.483 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:56.483 ================================================================================= 00:07:56.483 1.00000% : 5873.034us 00:07:56.483 10.00000% : 6024.271us 00:07:56.483 25.00000% : 6175.508us 00:07:56.483 50.00000% : 6452.775us 00:07:56.483 75.00000% : 6704.837us 00:07:56.483 90.00000% : 7914.732us 00:07:56.483 95.00000% : 9427.102us 00:07:56.483 98.00000% : 11141.120us 00:07:56.483 99.00000% : 12300.603us 00:07:56.483 99.50000% : 22080.591us 00:07:56.483 99.90000% : 30650.683us 00:07:56.483 99.99000% : 31053.982us 00:07:56.483 99.99900% : 31053.982us 00:07:56.483 99.99990% : 31053.982us 00:07:56.483 99.99999% : 31053.982us 00:07:56.483 00:07:56.483 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:56.483 ============================================================================== 00:07:56.483 Range in us Cumulative IO count 00:07:56.483 5620.972 - 5646.178: 0.0054% ( 1) 00:07:56.483 5646.178 - 5671.385: 0.0322% ( 5) 00:07:56.483 5671.385 - 5696.591: 0.1128% ( 15) 00:07:56.483 5696.591 - 5721.797: 0.2685% ( 29) 00:07:56.483 5721.797 - 5747.003: 0.5047% ( 44) 00:07:56.483 5747.003 - 5772.209: 0.9396% ( 81) 
00:07:56.483 5772.209 - 5797.415: 1.6967% ( 141) 00:07:56.483 5797.415 - 5822.622: 2.7652% ( 199) 00:07:56.483 5822.622 - 5847.828: 3.9412% ( 219) 00:07:56.483 5847.828 - 5873.034: 5.2781% ( 249) 00:07:56.483 5873.034 - 5898.240: 6.8890% ( 300) 00:07:56.483 5898.240 - 5923.446: 8.5481% ( 309) 00:07:56.483 5923.446 - 5948.652: 10.1589% ( 300) 00:07:56.483 5948.652 - 5973.858: 11.9523% ( 334) 00:07:56.483 5973.858 - 5999.065: 13.8477% ( 353) 00:07:56.483 5999.065 - 6024.271: 15.7378% ( 352) 00:07:56.483 6024.271 - 6049.477: 17.7191% ( 369) 00:07:56.483 6049.477 - 6074.683: 19.5339% ( 338) 00:07:56.483 6074.683 - 6099.889: 21.6763% ( 399) 00:07:56.483 6099.889 - 6125.095: 23.5395% ( 347) 00:07:56.483 6125.095 - 6150.302: 25.6282% ( 389) 00:07:56.483 6150.302 - 6175.508: 27.5881% ( 365) 00:07:56.483 6175.508 - 6200.714: 29.6553% ( 385) 00:07:56.483 6200.714 - 6225.920: 31.7655% ( 393) 00:07:56.483 6225.920 - 6251.126: 33.8595% ( 390) 00:07:56.483 6251.126 - 6276.332: 35.8677% ( 374) 00:07:56.483 6276.332 - 6301.538: 37.9134% ( 381) 00:07:56.483 6301.538 - 6326.745: 39.9592% ( 381) 00:07:56.483 6326.745 - 6351.951: 42.0801% ( 395) 00:07:56.483 6351.951 - 6377.157: 44.1259% ( 381) 00:07:56.483 6377.157 - 6402.363: 46.2146% ( 389) 00:07:56.483 6402.363 - 6427.569: 48.2818% ( 385) 00:07:56.483 6427.569 - 6452.775: 50.3759% ( 390) 00:07:56.483 6452.775 - 6503.188: 54.6177% ( 790) 00:07:56.483 6503.188 - 6553.600: 58.7253% ( 765) 00:07:56.483 6553.600 - 6604.012: 63.0262% ( 801) 00:07:56.483 6604.012 - 6654.425: 67.2412% ( 785) 00:07:56.483 6654.425 - 6704.837: 71.3649% ( 768) 00:07:56.483 6704.837 - 6755.249: 75.2470% ( 723) 00:07:56.483 6755.249 - 6805.662: 78.5384% ( 613) 00:07:56.483 6805.662 - 6856.074: 80.8419% ( 429) 00:07:56.483 6856.074 - 6906.486: 82.6407% ( 335) 00:07:56.483 6906.486 - 6956.898: 83.8756% ( 230) 00:07:56.483 6956.898 - 7007.311: 84.7616% ( 165) 00:07:56.483 7007.311 - 7057.723: 85.3254% ( 105) 00:07:56.483 7057.723 - 7108.135: 85.8570% ( 99) 
00:07:56.483 7108.135 - 7158.548: 86.2489% ( 73) 00:07:56.483 7158.548 - 7208.960: 86.6033% ( 66) 00:07:56.483 7208.960 - 7259.372: 86.8825% ( 52) 00:07:56.483 7259.372 - 7309.785: 87.1295% ( 46) 00:07:56.483 7309.785 - 7360.197: 87.4195% ( 54) 00:07:56.483 7360.197 - 7410.609: 87.6450% ( 42) 00:07:56.483 7410.609 - 7461.022: 87.8920% ( 46) 00:07:56.483 7461.022 - 7511.434: 88.1336% ( 45) 00:07:56.483 7511.434 - 7561.846: 88.3967% ( 49) 00:07:56.483 7561.846 - 7612.258: 88.6491% ( 47) 00:07:56.483 7612.258 - 7662.671: 88.8638% ( 40) 00:07:56.483 7662.671 - 7713.083: 89.0679% ( 38) 00:07:56.483 7713.083 - 7763.495: 89.2665% ( 37) 00:07:56.483 7763.495 - 7813.908: 89.4491% ( 34) 00:07:56.483 7813.908 - 7864.320: 89.5887% ( 26) 00:07:56.483 7864.320 - 7914.732: 89.7229% ( 25) 00:07:56.483 7914.732 - 7965.145: 89.8679% ( 27) 00:07:56.483 7965.145 - 8015.557: 90.0344% ( 31) 00:07:56.483 8015.557 - 8065.969: 90.2169% ( 34) 00:07:56.483 8065.969 - 8116.382: 90.3995% ( 34) 00:07:56.483 8116.382 - 8166.794: 90.5659% ( 31) 00:07:56.483 8166.794 - 8217.206: 90.7700% ( 38) 00:07:56.483 8217.206 - 8267.618: 90.9848% ( 40) 00:07:56.483 8267.618 - 8318.031: 91.1512% ( 31) 00:07:56.483 8318.031 - 8368.443: 91.3499% ( 37) 00:07:56.483 8368.443 - 8418.855: 91.5378% ( 35) 00:07:56.483 8418.855 - 8469.268: 91.7579% ( 41) 00:07:56.483 8469.268 - 8519.680: 91.9781% ( 41) 00:07:56.483 8519.680 - 8570.092: 92.1607% ( 34) 00:07:56.483 8570.092 - 8620.505: 92.3378% ( 33) 00:07:56.483 8620.505 - 8670.917: 92.5473% ( 39) 00:07:56.483 8670.917 - 8721.329: 92.6922% ( 27) 00:07:56.483 8721.329 - 8771.742: 92.8479% ( 29) 00:07:56.483 8771.742 - 8822.154: 93.0251% ( 33) 00:07:56.483 8822.154 - 8872.566: 93.1862% ( 30) 00:07:56.483 8872.566 - 8922.978: 93.3580% ( 32) 00:07:56.483 8922.978 - 8973.391: 93.5030% ( 27) 00:07:56.483 8973.391 - 9023.803: 93.6265% ( 23) 00:07:56.483 9023.803 - 9074.215: 93.7393% ( 21) 00:07:56.483 9074.215 - 9124.628: 93.8842% ( 27) 00:07:56.483 9124.628 - 9175.040: 
94.0131% ( 24) 00:07:56.483 9175.040 - 9225.452: 94.1366% ( 23) 00:07:56.483 9225.452 - 9275.865: 94.2655% ( 24) 00:07:56.483 9275.865 - 9326.277: 94.3890% ( 23) 00:07:56.483 9326.277 - 9376.689: 94.5393% ( 28) 00:07:56.483 9376.689 - 9427.102: 94.6789% ( 26) 00:07:56.483 9427.102 - 9477.514: 94.7756% ( 18) 00:07:56.483 9477.514 - 9527.926: 94.8507% ( 14) 00:07:56.483 9527.926 - 9578.338: 94.9366% ( 16) 00:07:56.483 9578.338 - 9628.751: 95.0226% ( 16) 00:07:56.483 9628.751 - 9679.163: 95.1085% ( 16) 00:07:56.483 9679.163 - 9729.575: 95.2212% ( 21) 00:07:56.483 9729.575 - 9779.988: 95.3340% ( 21) 00:07:56.483 9779.988 - 9830.400: 95.4306% ( 18) 00:07:56.483 9830.400 - 9880.812: 95.5273% ( 18) 00:07:56.483 9880.812 - 9931.225: 95.6239% ( 18) 00:07:56.483 9931.225 - 9981.637: 95.7152% ( 17) 00:07:56.483 9981.637 - 10032.049: 95.7904% ( 14) 00:07:56.483 10032.049 - 10082.462: 95.9192% ( 24) 00:07:56.483 10082.462 - 10132.874: 96.0213% ( 19) 00:07:56.483 10132.874 - 10183.286: 96.1823% ( 30) 00:07:56.483 10183.286 - 10233.698: 96.3005% ( 22) 00:07:56.483 10233.698 - 10284.111: 96.4401% ( 26) 00:07:56.483 10284.111 - 10334.523: 96.5528% ( 21) 00:07:56.483 10334.523 - 10384.935: 96.6656% ( 21) 00:07:56.483 10384.935 - 10435.348: 96.7784% ( 21) 00:07:56.483 10435.348 - 10485.760: 96.9072% ( 24) 00:07:56.483 10485.760 - 10536.172: 97.0200% ( 21) 00:07:56.483 10536.172 - 10586.585: 97.1488% ( 24) 00:07:56.483 10586.585 - 10636.997: 97.2723% ( 23) 00:07:56.483 10636.997 - 10687.409: 97.3475% ( 14) 00:07:56.483 10687.409 - 10737.822: 97.4549% ( 20) 00:07:56.483 10737.822 - 10788.234: 97.5301% ( 14) 00:07:56.483 10788.234 - 10838.646: 97.6267% ( 18) 00:07:56.483 10838.646 - 10889.058: 97.6912% ( 12) 00:07:56.483 10889.058 - 10939.471: 97.7663% ( 14) 00:07:56.483 10939.471 - 10989.883: 97.8415% ( 14) 00:07:56.483 10989.883 - 11040.295: 97.9059% ( 12) 00:07:56.483 11040.295 - 11090.708: 97.9650% ( 11) 00:07:56.483 11090.708 - 11141.120: 98.0187% ( 10) 00:07:56.483 11141.120 - 
11191.532: 98.0724% ( 10) 00:07:56.483 11191.532 - 11241.945: 98.1153% ( 8) 00:07:56.483 11241.945 - 11292.357: 98.1583% ( 8) 00:07:56.483 11292.357 - 11342.769: 98.2120% ( 10) 00:07:56.483 11342.769 - 11393.182: 98.2603% ( 9) 00:07:56.483 11393.182 - 11443.594: 98.3194% ( 11) 00:07:56.483 11443.594 - 11494.006: 98.3784% ( 11) 00:07:56.483 11494.006 - 11544.418: 98.4214% ( 8) 00:07:56.483 11544.418 - 11594.831: 98.4751% ( 10) 00:07:56.483 11594.831 - 11645.243: 98.4966% ( 4) 00:07:56.483 11645.243 - 11695.655: 98.5395% ( 8) 00:07:56.483 11695.655 - 11746.068: 98.5717% ( 6) 00:07:56.483 11746.068 - 11796.480: 98.6040% ( 6) 00:07:56.483 11796.480 - 11846.892: 98.6254% ( 4) 00:07:56.483 11846.892 - 11897.305: 98.6469% ( 4) 00:07:56.483 11897.305 - 11947.717: 98.6738% ( 5) 00:07:56.483 11947.717 - 11998.129: 98.7006% ( 5) 00:07:56.483 11998.129 - 12048.542: 98.7274% ( 5) 00:07:56.483 12048.542 - 12098.954: 98.7543% ( 5) 00:07:56.483 12098.954 - 12149.366: 98.7758% ( 4) 00:07:56.483 12149.366 - 12199.778: 98.8026% ( 5) 00:07:56.483 12199.778 - 12250.191: 98.8295% ( 5) 00:07:56.483 12250.191 - 12300.603: 98.8563% ( 5) 00:07:56.483 12300.603 - 12351.015: 98.8778% ( 4) 00:07:56.483 12351.015 - 12401.428: 98.8993% ( 4) 00:07:56.483 12401.428 - 12451.840: 98.9046% ( 1) 00:07:56.483 12451.840 - 12502.252: 98.9154% ( 2) 00:07:56.483 12502.252 - 12552.665: 98.9207% ( 1) 00:07:56.483 12552.665 - 12603.077: 98.9261% ( 1) 00:07:56.483 12603.077 - 12653.489: 98.9315% ( 1) 00:07:56.483 12653.489 - 12703.902: 98.9369% ( 1) 00:07:56.483 12703.902 - 12754.314: 98.9476% ( 2) 00:07:56.484 12754.314 - 12804.726: 98.9530% ( 1) 00:07:56.484 12804.726 - 12855.138: 98.9637% ( 2) 00:07:56.484 12855.138 - 12905.551: 98.9691% ( 1) 00:07:56.484 14317.095 - 14417.920: 98.9798% ( 2) 00:07:56.484 14417.920 - 14518.745: 99.0013% ( 4) 00:07:56.484 14518.745 - 14619.569: 99.0174% ( 3) 00:07:56.484 14619.569 - 14720.394: 99.0389% ( 4) 00:07:56.484 14720.394 - 14821.218: 99.0496% ( 2) 00:07:56.484 
14821.218 - 14922.043: 99.0657% ( 3) 00:07:56.484 14922.043 - 15022.868: 99.0818% ( 3) 00:07:56.484 15022.868 - 15123.692: 99.0979% ( 3) 00:07:56.484 15123.692 - 15224.517: 99.1140% ( 3) 00:07:56.484 15224.517 - 15325.342: 99.1302% ( 3) 00:07:56.484 15325.342 - 15426.166: 99.1409% ( 2) 00:07:56.484 15426.166 - 15526.991: 99.1570% ( 3) 00:07:56.484 15526.991 - 15627.815: 99.1785% ( 4) 00:07:56.484 15627.815 - 15728.640: 99.2000% ( 4) 00:07:56.484 15728.640 - 15829.465: 99.2161% ( 3) 00:07:56.484 15829.465 - 15930.289: 99.2268% ( 2) 00:07:56.484 15930.289 - 16031.114: 99.2429% ( 3) 00:07:56.484 16031.114 - 16131.938: 99.2590% ( 3) 00:07:56.484 16131.938 - 16232.763: 99.2751% ( 3) 00:07:56.484 16232.763 - 16333.588: 99.2912% ( 3) 00:07:56.484 16333.588 - 16434.412: 99.3020% ( 2) 00:07:56.484 16434.412 - 16535.237: 99.3127% ( 2) 00:07:56.484 24298.732 - 24399.557: 99.3181% ( 1) 00:07:56.484 24399.557 - 24500.382: 99.3288% ( 2) 00:07:56.484 24500.382 - 24601.206: 99.3396% ( 2) 00:07:56.484 24601.206 - 24702.031: 99.3557% ( 3) 00:07:56.484 24702.031 - 24802.855: 99.3610% ( 1) 00:07:56.484 24802.855 - 24903.680: 99.3771% ( 3) 00:07:56.484 24903.680 - 25004.505: 99.3879% ( 2) 00:07:56.484 25004.505 - 25105.329: 99.4040% ( 3) 00:07:56.484 25105.329 - 25206.154: 99.4094% ( 1) 00:07:56.484 25206.154 - 25306.978: 99.4255% ( 3) 00:07:56.484 25306.978 - 25407.803: 99.4362% ( 2) 00:07:56.484 25407.803 - 25508.628: 99.4523% ( 3) 00:07:56.484 25508.628 - 25609.452: 99.4631% ( 2) 00:07:56.484 25609.452 - 25710.277: 99.4792% ( 3) 00:07:56.484 25710.277 - 25811.102: 99.4899% ( 2) 00:07:56.484 25811.102 - 26012.751: 99.5168% ( 5) 00:07:56.484 26012.751 - 26214.400: 99.5382% ( 4) 00:07:56.484 26214.400 - 26416.049: 99.5651% ( 5) 00:07:56.484 26416.049 - 26617.698: 99.5919% ( 5) 00:07:56.484 26617.698 - 26819.348: 99.6188% ( 5) 00:07:56.484 26819.348 - 27020.997: 99.6456% ( 5) 00:07:56.484 27020.997 - 27222.646: 99.6564% ( 2) 00:07:56.484 34078.720 - 34280.369: 99.6617% ( 1) 00:07:56.484 
34280.369 - 34482.018: 99.6886% ( 5) 00:07:56.484 34482.018 - 34683.668: 99.7101% ( 4) 00:07:56.484 34683.668 - 34885.317: 99.7315% ( 4) 00:07:56.484 34885.317 - 35086.966: 99.7530% ( 4) 00:07:56.484 35086.966 - 35288.615: 99.7799% ( 5) 00:07:56.484 35288.615 - 35490.265: 99.8067% ( 5) 00:07:56.484 35490.265 - 35691.914: 99.8282% ( 4) 00:07:56.484 35691.914 - 35893.563: 99.8550% ( 5) 00:07:56.484 35893.563 - 36095.212: 99.8819% ( 5) 00:07:56.484 36095.212 - 36296.862: 99.9034% ( 4) 00:07:56.484 36296.862 - 36498.511: 99.9248% ( 4) 00:07:56.484 36498.511 - 36700.160: 99.9570% ( 6) 00:07:56.484 36700.160 - 36901.809: 99.9785% ( 4) 00:07:56.484 36901.809 - 37103.458: 100.0000% ( 4) 00:07:56.484
00:07:56.484 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:56.484 ==============================================================================
00:07:56.484 Range in us Cumulative IO count
00:07:56.484 5721.797 - 5747.003: 0.0107% ( 2) 00:07:56.484 5747.003 - 5772.209: 0.0752% ( 12) 00:07:56.484 5772.209 - 5797.415: 0.2309% ( 29) 00:07:56.484 5797.415 - 5822.622: 0.4403% ( 39) 00:07:56.484 5822.622 - 5847.828: 0.7463% ( 57) 00:07:56.484 5847.828 - 5873.034: 1.2887% ( 101) 00:07:56.484 5873.034 - 5898.240: 2.3625% ( 200) 00:07:56.484 5898.240 - 5923.446: 4.0056% ( 306) 00:07:56.484 5923.446 - 5948.652: 5.6808% ( 312) 00:07:56.484 5948.652 - 5973.858: 7.4742% ( 334) 00:07:56.484 5973.858 - 5999.065: 9.3804% ( 355) 00:07:56.484 5999.065 - 6024.271: 11.6033% ( 414) 00:07:56.484 6024.271 - 6049.477: 13.7027% ( 391) 00:07:56.484 6049.477 - 6074.683: 15.9848% ( 425) 00:07:56.484 6074.683 - 6099.889: 18.4708% ( 463) 00:07:56.484 6099.889 - 6125.095: 20.7313% ( 421) 00:07:56.484 6125.095 - 6150.302: 22.9274% ( 409) 00:07:56.484 6150.302 - 6175.508: 25.1611% ( 416) 00:07:56.484 6175.508 - 6200.714: 27.5558% ( 446) 00:07:56.484 6200.714 - 6225.920: 29.9828% ( 452) 00:07:56.484 6225.920 - 6251.126: 32.3239% ( 436) 00:07:56.484 6251.126 - 6276.332: 34.6274% ( 429)
00:07:56.484 6276.332 - 6301.538: 37.0597% ( 453) 00:07:56.484 6301.538 - 6326.745: 39.4921% ( 453) 00:07:56.484 6326.745 - 6351.951: 41.9835% ( 464) 00:07:56.484 6351.951 - 6377.157: 44.4856% ( 466) 00:07:56.484 6377.157 - 6402.363: 46.9663% ( 462) 00:07:56.484 6402.363 - 6427.569: 49.4738% ( 467) 00:07:56.484 6427.569 - 6452.775: 52.0028% ( 471) 00:07:56.484 6452.775 - 6503.188: 56.9104% ( 914) 00:07:56.484 6503.188 - 6553.600: 61.9094% ( 931) 00:07:56.484 6553.600 - 6604.012: 66.8761% ( 925) 00:07:56.484 6604.012 - 6654.425: 71.5743% ( 875) 00:07:56.484 6654.425 - 6704.837: 75.8054% ( 788) 00:07:56.484 6704.837 - 6755.249: 78.8660% ( 570) 00:07:56.484 6755.249 - 6805.662: 81.0997% ( 416) 00:07:56.484 6805.662 - 6856.074: 82.7105% ( 300) 00:07:56.484 6856.074 - 6906.486: 83.8005% ( 203) 00:07:56.484 6906.486 - 6956.898: 84.5737% ( 144) 00:07:56.484 6956.898 - 7007.311: 85.2126% ( 119) 00:07:56.484 7007.311 - 7057.723: 85.6583% ( 83) 00:07:56.484 7057.723 - 7108.135: 86.0556% ( 74) 00:07:56.484 7108.135 - 7158.548: 86.3563% ( 56) 00:07:56.484 7158.548 - 7208.960: 86.6570% ( 56) 00:07:56.484 7208.960 - 7259.372: 86.9147% ( 48) 00:07:56.484 7259.372 - 7309.785: 87.1510% ( 44) 00:07:56.484 7309.785 - 7360.197: 87.4087% ( 48) 00:07:56.484 7360.197 - 7410.609: 87.6826% ( 51) 00:07:56.484 7410.609 - 7461.022: 87.9994% ( 59) 00:07:56.484 7461.022 - 7511.434: 88.2356% ( 44) 00:07:56.484 7511.434 - 7561.846: 88.4128% ( 33) 00:07:56.484 7561.846 - 7612.258: 88.6061% ( 36) 00:07:56.484 7612.258 - 7662.671: 88.7887% ( 34) 00:07:56.484 7662.671 - 7713.083: 88.9444% ( 29) 00:07:56.484 7713.083 - 7763.495: 89.1001% ( 29) 00:07:56.484 7763.495 - 7813.908: 89.2451% ( 27) 00:07:56.484 7813.908 - 7864.320: 89.3847% ( 26) 00:07:56.484 7864.320 - 7914.732: 89.5672% ( 34) 00:07:56.484 7914.732 - 7965.145: 89.7444% ( 33) 00:07:56.484 7965.145 - 8015.557: 89.9162% ( 32) 00:07:56.484 8015.557 - 8065.969: 90.1095% ( 36) 00:07:56.484 8065.969 - 8116.382: 90.3136% ( 38) 00:07:56.484 8116.382 
- 8166.794: 90.4961% ( 34) 00:07:56.484 8166.794 - 8217.206: 90.7592% ( 49) 00:07:56.484 8217.206 - 8267.618: 90.9579% ( 37) 00:07:56.484 8267.618 - 8318.031: 91.1190% ( 30) 00:07:56.484 8318.031 - 8368.443: 91.2854% ( 31) 00:07:56.484 8368.443 - 8418.855: 91.4358% ( 28) 00:07:56.484 8418.855 - 8469.268: 91.5808% ( 27) 00:07:56.484 8469.268 - 8519.680: 91.7579% ( 33) 00:07:56.484 8519.680 - 8570.092: 91.9888% ( 43) 00:07:56.484 8570.092 - 8620.505: 92.1875% ( 37) 00:07:56.484 8620.505 - 8670.917: 92.4345% ( 46) 00:07:56.484 8670.917 - 8721.329: 92.6171% ( 34) 00:07:56.484 8721.329 - 8771.742: 92.8157% ( 37) 00:07:56.484 8771.742 - 8822.154: 92.9983% ( 34) 00:07:56.484 8822.154 - 8872.566: 93.1916% ( 36) 00:07:56.484 8872.566 - 8922.978: 93.3473% ( 29) 00:07:56.484 8922.978 - 8973.391: 93.4923% ( 27) 00:07:56.484 8973.391 - 9023.803: 93.6372% ( 27) 00:07:56.484 9023.803 - 9074.215: 93.7715% ( 25) 00:07:56.484 9074.215 - 9124.628: 93.9218% ( 28) 00:07:56.484 9124.628 - 9175.040: 94.0614% ( 26) 00:07:56.484 9175.040 - 9225.452: 94.1957% ( 25) 00:07:56.484 9225.452 - 9275.865: 94.3084% ( 21) 00:07:56.484 9275.865 - 9326.277: 94.4265% ( 22) 00:07:56.484 9326.277 - 9376.689: 94.5339% ( 20) 00:07:56.484 9376.689 - 9427.102: 94.6252% ( 17) 00:07:56.484 9427.102 - 9477.514: 94.7004% ( 14) 00:07:56.484 9477.514 - 9527.926: 94.7970% ( 18) 00:07:56.484 9527.926 - 9578.338: 94.8991% ( 19) 00:07:56.484 9578.338 - 9628.751: 94.9957% ( 18) 00:07:56.484 9628.751 - 9679.163: 95.1085% ( 21) 00:07:56.484 9679.163 - 9729.575: 95.1997% ( 17) 00:07:56.484 9729.575 - 9779.988: 95.3501% ( 28) 00:07:56.484 9779.988 - 9830.400: 95.4414% ( 17) 00:07:56.484 9830.400 - 9880.812: 95.5165% ( 14) 00:07:56.484 9880.812 - 9931.225: 95.6078% ( 17) 00:07:56.484 9931.225 - 9981.637: 95.6937% ( 16) 00:07:56.484 9981.637 - 10032.049: 95.8065% ( 21) 00:07:56.484 10032.049 - 10082.462: 95.9300% ( 23) 00:07:56.484 10082.462 - 10132.874: 96.0320% ( 19) 00:07:56.484 10132.874 - 10183.286: 96.1662% ( 25) 
00:07:56.484 10183.286 - 10233.698: 96.3005% ( 25) 00:07:56.484 10233.698 - 10284.111: 96.4240% ( 23) 00:07:56.484 10284.111 - 10334.523: 96.5421% ( 22) 00:07:56.484 10334.523 - 10384.935: 96.6602% ( 22) 00:07:56.484 10384.935 - 10435.348: 96.7784% ( 22) 00:07:56.484 10435.348 - 10485.760: 96.8911% ( 21) 00:07:56.484 10485.760 - 10536.172: 97.0146% ( 23) 00:07:56.484 10536.172 - 10586.585: 97.1220% ( 20) 00:07:56.484 10586.585 - 10636.997: 97.2240% ( 19) 00:07:56.484 10636.997 - 10687.409: 97.3207% ( 18) 00:07:56.484 10687.409 - 10737.822: 97.4388% ( 22) 00:07:56.484 10737.822 - 10788.234: 97.5838% ( 27) 00:07:56.484 10788.234 - 10838.646: 97.6965% ( 21) 00:07:56.484 10838.646 - 10889.058: 97.8093% ( 21) 00:07:56.484 10889.058 - 10939.471: 97.9059% ( 18) 00:07:56.484 10939.471 - 10989.883: 97.9972% ( 17) 00:07:56.485 10989.883 - 11040.295: 98.0777% ( 15) 00:07:56.485 11040.295 - 11090.708: 98.1476% ( 13) 00:07:56.485 11090.708 - 11141.120: 98.2120% ( 12) 00:07:56.485 11141.120 - 11191.532: 98.2603% ( 9) 00:07:56.485 11191.532 - 11241.945: 98.2979% ( 7) 00:07:56.485 11241.945 - 11292.357: 98.3516% ( 10) 00:07:56.485 11292.357 - 11342.769: 98.3999% ( 9) 00:07:56.485 11342.769 - 11393.182: 98.4536% ( 10) 00:07:56.485 11393.182 - 11443.594: 98.4966% ( 8) 00:07:56.485 11443.594 - 11494.006: 98.5341% ( 7) 00:07:56.485 11494.006 - 11544.418: 98.6040% ( 13) 00:07:56.485 11544.418 - 11594.831: 98.6738% ( 13) 00:07:56.485 11594.831 - 11645.243: 98.7006% ( 5) 00:07:56.485 11645.243 - 11695.655: 98.7221% ( 4) 00:07:56.485 11695.655 - 11746.068: 98.7436% ( 4) 00:07:56.485 11746.068 - 11796.480: 98.7650% ( 4) 00:07:56.485 11796.480 - 11846.892: 98.7811% ( 3) 00:07:56.485 11846.892 - 11897.305: 98.8026% ( 4) 00:07:56.485 11897.305 - 11947.717: 98.8187% ( 3) 00:07:56.485 11947.717 - 11998.129: 98.8402% ( 4) 00:07:56.485 11998.129 - 12048.542: 98.8563% ( 3) 00:07:56.485 12048.542 - 12098.954: 98.8778% ( 4) 00:07:56.485 12098.954 - 12149.366: 98.8993% ( 4) 00:07:56.485 12149.366 - 
12199.778: 98.9154% ( 3) 00:07:56.485 12199.778 - 12250.191: 98.9369% ( 4) 00:07:56.485 12250.191 - 12300.603: 98.9583% ( 4) 00:07:56.485 12300.603 - 12351.015: 98.9691% ( 2) 00:07:56.485 13712.148 - 13812.972: 98.9852% ( 3) 00:07:56.485 13812.972 - 13913.797: 99.0067% ( 4) 00:07:56.485 13913.797 - 14014.622: 99.0228% ( 3) 00:07:56.485 14014.622 - 14115.446: 99.0442% ( 4) 00:07:56.485 14115.446 - 14216.271: 99.0657% ( 4) 00:07:56.485 14216.271 - 14317.095: 99.0872% ( 4) 00:07:56.485 14317.095 - 14417.920: 99.1087% ( 4) 00:07:56.485 14417.920 - 14518.745: 99.1248% ( 3) 00:07:56.485 14518.745 - 14619.569: 99.1463% ( 4) 00:07:56.485 14619.569 - 14720.394: 99.1677% ( 4) 00:07:56.485 14720.394 - 14821.218: 99.1838% ( 3) 00:07:56.485 14821.218 - 14922.043: 99.2053% ( 4) 00:07:56.485 14922.043 - 15022.868: 99.2214% ( 3) 00:07:56.485 15022.868 - 15123.692: 99.2375% ( 3) 00:07:56.485 15123.692 - 15224.517: 99.2590% ( 4) 00:07:56.485 15224.517 - 15325.342: 99.2698% ( 2) 00:07:56.485 15325.342 - 15426.166: 99.2912% ( 4) 00:07:56.485 15426.166 - 15526.991: 99.3073% ( 3) 00:07:56.485 15526.991 - 15627.815: 99.3127% ( 1) 00:07:56.485 23895.434 - 23996.258: 99.3235% ( 2) 00:07:56.485 23996.258 - 24097.083: 99.3342% ( 2) 00:07:56.485 24097.083 - 24197.908: 99.3503% ( 3) 00:07:56.485 24197.908 - 24298.732: 99.3557% ( 1) 00:07:56.485 24298.732 - 24399.557: 99.3718% ( 3) 00:07:56.485 24399.557 - 24500.382: 99.3825% ( 2) 00:07:56.485 24500.382 - 24601.206: 99.3986% ( 3) 00:07:56.485 24601.206 - 24702.031: 99.4094% ( 2) 00:07:56.485 24702.031 - 24802.855: 99.4201% ( 2) 00:07:56.485 24802.855 - 24903.680: 99.4362% ( 3) 00:07:56.485 24903.680 - 25004.505: 99.4523% ( 3) 00:07:56.485 25004.505 - 25105.329: 99.4684% ( 3) 00:07:56.485 25105.329 - 25206.154: 99.4845% ( 3) 00:07:56.485 25206.154 - 25306.978: 99.4953% ( 2) 00:07:56.485 25306.978 - 25407.803: 99.5114% ( 3) 00:07:56.485 25407.803 - 25508.628: 99.5221% ( 2) 00:07:56.485 25508.628 - 25609.452: 99.5382% ( 3) 00:07:56.485 25609.452 - 
25710.277: 99.5543% ( 3) 00:07:56.485 25710.277 - 25811.102: 99.5704% ( 3) 00:07:56.485 25811.102 - 26012.751: 99.5973% ( 5) 00:07:56.485 26012.751 - 26214.400: 99.6295% ( 6) 00:07:56.485 26214.400 - 26416.049: 99.6564% ( 5) 00:07:56.485 33675.422 - 33877.071: 99.6725% ( 3) 00:07:56.485 33877.071 - 34078.720: 99.7208% ( 9) 00:07:56.485 34078.720 - 34280.369: 99.7637% ( 8) 00:07:56.485 34280.369 - 34482.018: 99.8121% ( 9) 00:07:56.485 34482.018 - 34683.668: 99.8550% ( 8) 00:07:56.485 34683.668 - 34885.317: 99.9034% ( 9) 00:07:56.485 34885.317 - 35086.966: 99.9463% ( 8) 00:07:56.485 35086.966 - 35288.615: 99.9946% ( 9) 00:07:56.485 35288.615 - 35490.265: 100.0000% ( 1) 00:07:56.485
00:07:56.485 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:56.485 ==============================================================================
00:07:56.485 Range in us Cumulative IO count
00:07:56.485 5721.797 - 5747.003: 0.0161% ( 3) 00:07:56.485 5747.003 - 5772.209: 0.0644% ( 9) 00:07:56.485 5772.209 - 5797.415: 0.1879% ( 23) 00:07:56.485 5797.415 - 5822.622: 0.4994% ( 58) 00:07:56.485 5822.622 - 5847.828: 1.2350% ( 137) 00:07:56.485 5847.828 - 5873.034: 2.0082% ( 144) 00:07:56.485 5873.034 - 5898.240: 2.9371% ( 173) 00:07:56.485 5898.240 - 5923.446: 4.3707% ( 267) 00:07:56.485 5923.446 - 5948.652: 5.9815% ( 300) 00:07:56.485 5948.652 - 5973.858: 7.7803% ( 335) 00:07:56.485 5973.858 - 5999.065: 9.7133% ( 360) 00:07:56.485 5999.065 - 6024.271: 11.8073% ( 390) 00:07:56.485 6024.271 - 6049.477: 14.0034% ( 409) 00:07:56.485 6049.477 - 6074.683: 16.2586% ( 420) 00:07:56.485 6074.683 - 6099.889: 18.6211% ( 440) 00:07:56.485 6099.889 - 6125.095: 20.8172% ( 409) 00:07:56.485 6125.095 - 6150.302: 23.1476% ( 434) 00:07:56.485 6150.302 - 6175.508: 25.4349% ( 426) 00:07:56.485 6175.508 - 6200.714: 27.8673% ( 453) 00:07:56.485 6200.714 - 6225.920: 30.3533% ( 463) 00:07:56.485 6225.920 - 6251.126: 32.8018% ( 456) 00:07:56.485 6251.126 - 6276.332: 35.2395% ( 454) 00:07:56.485
6276.332 - 6301.538: 37.6396% ( 447) 00:07:56.485 6301.538 - 6326.745: 40.0881% ( 456) 00:07:56.485 6326.745 - 6351.951: 42.5473% ( 458) 00:07:56.485 6351.951 - 6377.157: 45.0118% ( 459) 00:07:56.485 6377.157 - 6402.363: 47.4710% ( 458) 00:07:56.485 6402.363 - 6427.569: 49.9248% ( 457) 00:07:56.485 6427.569 - 6452.775: 52.4162% ( 464) 00:07:56.485 6452.775 - 6503.188: 57.3454% ( 918) 00:07:56.485 6503.188 - 6553.600: 62.2906% ( 921) 00:07:56.485 6553.600 - 6604.012: 67.1553% ( 906) 00:07:56.485 6604.012 - 6654.425: 71.8857% ( 881) 00:07:56.485 6654.425 - 6704.837: 75.8537% ( 739) 00:07:56.485 6704.837 - 6755.249: 79.0593% ( 597) 00:07:56.485 6755.249 - 6805.662: 81.3896% ( 434) 00:07:56.485 6805.662 - 6856.074: 82.9790% ( 296) 00:07:56.485 6856.074 - 6906.486: 83.9723% ( 185) 00:07:56.485 6906.486 - 6956.898: 84.7401% ( 143) 00:07:56.485 6956.898 - 7007.311: 85.3039% ( 105) 00:07:56.485 7007.311 - 7057.723: 85.8409% ( 100) 00:07:56.485 7057.723 - 7108.135: 86.2704% ( 80) 00:07:56.485 7108.135 - 7158.548: 86.6248% ( 66) 00:07:56.485 7158.548 - 7208.960: 86.9577% ( 62) 00:07:56.485 7208.960 - 7259.372: 87.2799% ( 60) 00:07:56.485 7259.372 - 7309.785: 87.5805% ( 56) 00:07:56.485 7309.785 - 7360.197: 87.8812% ( 56) 00:07:56.485 7360.197 - 7410.609: 88.1551% ( 51) 00:07:56.485 7410.609 - 7461.022: 88.4128% ( 48) 00:07:56.485 7461.022 - 7511.434: 88.6544% ( 45) 00:07:56.485 7511.434 - 7561.846: 88.9175% ( 49) 00:07:56.485 7561.846 - 7612.258: 89.1538% ( 44) 00:07:56.485 7612.258 - 7662.671: 89.3149% ( 30) 00:07:56.485 7662.671 - 7713.083: 89.4867% ( 32) 00:07:56.485 7713.083 - 7763.495: 89.6263% ( 26) 00:07:56.485 7763.495 - 7813.908: 89.7874% ( 30) 00:07:56.485 7813.908 - 7864.320: 89.9323% ( 27) 00:07:56.485 7864.320 - 7914.732: 90.0612% ( 24) 00:07:56.485 7914.732 - 7965.145: 90.1901% ( 24) 00:07:56.485 7965.145 - 8015.557: 90.3351% ( 27) 00:07:56.485 8015.557 - 8065.969: 90.4800% ( 27) 00:07:56.485 8065.969 - 8116.382: 90.6250% ( 27) 00:07:56.485 8116.382 - 8166.794: 
90.7807% ( 29) 00:07:56.485 8166.794 - 8217.206: 90.9525% ( 32) 00:07:56.485 8217.206 - 8267.618: 91.1244% ( 32) 00:07:56.485 8267.618 - 8318.031: 91.2908% ( 31) 00:07:56.485 8318.031 - 8368.443: 91.4412% ( 28) 00:07:56.485 8368.443 - 8418.855: 91.5861% ( 27) 00:07:56.485 8418.855 - 8469.268: 91.7257% ( 26) 00:07:56.485 8469.268 - 8519.680: 91.8600% ( 25) 00:07:56.485 8519.680 - 8570.092: 91.9942% ( 25) 00:07:56.485 8570.092 - 8620.505: 92.1123% ( 22) 00:07:56.485 8620.505 - 8670.917: 92.2466% ( 25) 00:07:56.485 8670.917 - 8721.329: 92.3915% ( 27) 00:07:56.485 8721.329 - 8771.742: 92.5473% ( 29) 00:07:56.485 8771.742 - 8822.154: 92.6976% ( 28) 00:07:56.485 8822.154 - 8872.566: 92.8533% ( 29) 00:07:56.485 8872.566 - 8922.978: 92.9983% ( 27) 00:07:56.485 8922.978 - 8973.391: 93.1325% ( 25) 00:07:56.485 8973.391 - 9023.803: 93.2829% ( 28) 00:07:56.485 9023.803 - 9074.215: 93.4386% ( 29) 00:07:56.485 9074.215 - 9124.628: 93.5889% ( 28) 00:07:56.485 9124.628 - 9175.040: 93.7124% ( 23) 00:07:56.485 9175.040 - 9225.452: 93.8144% ( 19) 00:07:56.485 9225.452 - 9275.865: 93.9326% ( 22) 00:07:56.485 9275.865 - 9326.277: 94.0399% ( 20) 00:07:56.485 9326.277 - 9376.689: 94.1527% ( 21) 00:07:56.485 9376.689 - 9427.102: 94.2708% ( 22) 00:07:56.485 9427.102 - 9477.514: 94.4104% ( 26) 00:07:56.485 9477.514 - 9527.926: 94.5393% ( 24) 00:07:56.485 9527.926 - 9578.338: 94.6789% ( 26) 00:07:56.485 9578.338 - 9628.751: 94.7917% ( 21) 00:07:56.485 9628.751 - 9679.163: 94.8991% ( 20) 00:07:56.485 9679.163 - 9729.575: 95.0655% ( 31) 00:07:56.485 9729.575 - 9779.988: 95.2320% ( 31) 00:07:56.485 9779.988 - 9830.400: 95.3769% ( 27) 00:07:56.485 9830.400 - 9880.812: 95.5541% ( 33) 00:07:56.485 9880.812 - 9931.225: 95.7259% ( 32) 00:07:56.485 9931.225 - 9981.637: 95.8817% ( 29) 00:07:56.486 9981.637 - 10032.049: 96.0588% ( 33) 00:07:56.486 10032.049 - 10082.462: 96.2092% ( 28) 00:07:56.486 10082.462 - 10132.874: 96.3649% ( 29) 00:07:56.486 10132.874 - 10183.286: 96.5152% ( 28) 00:07:56.486 
10183.286 - 10233.698: 96.6656% ( 28) 00:07:56.486 10233.698 - 10284.111: 96.7945% ( 24) 00:07:56.486 10284.111 - 10334.523: 96.9394% ( 27) 00:07:56.486 10334.523 - 10384.935: 97.0737% ( 25) 00:07:56.486 10384.935 - 10435.348: 97.1757% ( 19) 00:07:56.486 10435.348 - 10485.760: 97.2831% ( 20) 00:07:56.486 10485.760 - 10536.172: 97.3851% ( 19) 00:07:56.486 10536.172 - 10586.585: 97.5032% ( 22) 00:07:56.486 10586.585 - 10636.997: 97.5730% ( 13) 00:07:56.486 10636.997 - 10687.409: 97.6375% ( 12) 00:07:56.486 10687.409 - 10737.822: 97.7126% ( 14) 00:07:56.486 10737.822 - 10788.234: 97.7502% ( 7) 00:07:56.486 10788.234 - 10838.646: 97.7932% ( 8) 00:07:56.486 10838.646 - 10889.058: 97.8522% ( 11) 00:07:56.486 10889.058 - 10939.471: 97.8898% ( 7) 00:07:56.486 10939.471 - 10989.883: 97.9328% ( 8) 00:07:56.486 10989.883 - 11040.295: 97.9704% ( 7) 00:07:56.486 11040.295 - 11090.708: 98.0187% ( 9) 00:07:56.486 11090.708 - 11141.120: 98.0563% ( 7) 00:07:56.486 11141.120 - 11191.532: 98.1046% ( 9) 00:07:56.486 11191.532 - 11241.945: 98.1368% ( 6) 00:07:56.486 11241.945 - 11292.357: 98.1851% ( 9) 00:07:56.486 11292.357 - 11342.769: 98.2335% ( 9) 00:07:56.486 11342.769 - 11393.182: 98.2710% ( 7) 00:07:56.486 11393.182 - 11443.594: 98.3140% ( 8) 00:07:56.486 11443.594 - 11494.006: 98.3623% ( 9) 00:07:56.486 11494.006 - 11544.418: 98.4160% ( 10) 00:07:56.486 11544.418 - 11594.831: 98.4858% ( 13) 00:07:56.486 11594.831 - 11645.243: 98.5288% ( 8) 00:07:56.486 11645.243 - 11695.655: 98.5664% ( 7) 00:07:56.486 11695.655 - 11746.068: 98.5986% ( 6) 00:07:56.486 11746.068 - 11796.480: 98.6308% ( 6) 00:07:56.486 11796.480 - 11846.892: 98.6738% ( 8) 00:07:56.486 11846.892 - 11897.305: 98.7060% ( 6) 00:07:56.486 11897.305 - 11947.717: 98.7489% ( 8) 00:07:56.486 11947.717 - 11998.129: 98.7865% ( 7) 00:07:56.486 11998.129 - 12048.542: 98.8241% ( 7) 00:07:56.486 12048.542 - 12098.954: 98.8563% ( 6) 00:07:56.486 12098.954 - 12149.366: 98.8724% ( 3) 00:07:56.486 12149.366 - 12199.778: 98.8939% ( 
4) 00:07:56.486 12199.778 - 12250.191: 98.9154% ( 4) 00:07:56.486 12250.191 - 12300.603: 98.9315% ( 3) 00:07:56.486 12300.603 - 12351.015: 98.9530% ( 4) 00:07:56.486 12351.015 - 12401.428: 98.9691% ( 3) 00:07:56.486 14014.622 - 14115.446: 98.9852% ( 3) 00:07:56.486 14115.446 - 14216.271: 99.0013% ( 3) 00:07:56.486 14216.271 - 14317.095: 99.0120% ( 2) 00:07:56.486 14317.095 - 14417.920: 99.0389% ( 5) 00:07:56.486 14417.920 - 14518.745: 99.0550% ( 3) 00:07:56.486 14518.745 - 14619.569: 99.0765% ( 4) 00:07:56.486 14619.569 - 14720.394: 99.0979% ( 4) 00:07:56.486 14720.394 - 14821.218: 99.1194% ( 4) 00:07:56.486 14821.218 - 14922.043: 99.1355% ( 3) 00:07:56.486 14922.043 - 15022.868: 99.1570% ( 4) 00:07:56.486 15022.868 - 15123.692: 99.1731% ( 3) 00:07:56.486 15123.692 - 15224.517: 99.1946% ( 4) 00:07:56.486 15224.517 - 15325.342: 99.2161% ( 4) 00:07:56.486 15325.342 - 15426.166: 99.2322% ( 3) 00:07:56.486 15426.166 - 15526.991: 99.2537% ( 4) 00:07:56.486 15526.991 - 15627.815: 99.2751% ( 4) 00:07:56.486 15627.815 - 15728.640: 99.2966% ( 4) 00:07:56.486 15728.640 - 15829.465: 99.3073% ( 2) 00:07:56.486 15829.465 - 15930.289: 99.3127% ( 1) 00:07:56.486 23088.837 - 23189.662: 99.3235% ( 2) 00:07:56.486 23189.662 - 23290.486: 99.3342% ( 2) 00:07:56.486 23290.486 - 23391.311: 99.3449% ( 2) 00:07:56.486 23391.311 - 23492.135: 99.3610% ( 3) 00:07:56.486 23492.135 - 23592.960: 99.3718% ( 2) 00:07:56.486 23592.960 - 23693.785: 99.3879% ( 3) 00:07:56.486 23693.785 - 23794.609: 99.4040% ( 3) 00:07:56.486 23794.609 - 23895.434: 99.4147% ( 2) 00:07:56.486 23895.434 - 23996.258: 99.4308% ( 3) 00:07:56.486 23996.258 - 24097.083: 99.4470% ( 3) 00:07:56.486 24097.083 - 24197.908: 99.4631% ( 3) 00:07:56.486 24197.908 - 24298.732: 99.4738% ( 2) 00:07:56.486 24298.732 - 24399.557: 99.4899% ( 3) 00:07:56.486 24399.557 - 24500.382: 99.5060% ( 3) 00:07:56.486 24500.382 - 24601.206: 99.5168% ( 2) 00:07:56.486 24601.206 - 24702.031: 99.5329% ( 3) 00:07:56.486 24702.031 - 24802.855: 99.5490% ( 
3) 00:07:56.486 24802.855 - 24903.680: 99.5651% ( 3) 00:07:56.486 24903.680 - 25004.505: 99.5758% ( 2) 00:07:56.486 25004.505 - 25105.329: 99.5866% ( 2) 00:07:56.486 25105.329 - 25206.154: 99.6027% ( 3) 00:07:56.486 25206.154 - 25306.978: 99.6134% ( 2) 00:07:56.486 25306.978 - 25407.803: 99.6295% ( 3) 00:07:56.486 25407.803 - 25508.628: 99.6456% ( 3) 00:07:56.486 25508.628 - 25609.452: 99.6564% ( 2) 00:07:56.486 33272.123 - 33473.772: 99.6832% ( 5) 00:07:56.486 33473.772 - 33675.422: 99.7262% ( 8) 00:07:56.486 33675.422 - 33877.071: 99.7637% ( 7) 00:07:56.486 33877.071 - 34078.720: 99.8121% ( 9) 00:07:56.486 34078.720 - 34280.369: 99.8550% ( 8) 00:07:56.486 34280.369 - 34482.018: 99.8980% ( 8) 00:07:56.486 34482.018 - 34683.668: 99.9463% ( 9) 00:07:56.486 34683.668 - 34885.317: 99.9893% ( 8) 00:07:56.486 34885.317 - 35086.966: 100.0000% ( 2) 00:07:56.486
00:07:56.486 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:56.486 ==============================================================================
00:07:56.486 Range in us Cumulative IO count
00:07:56.486 5721.797 - 5747.003: 0.0107% ( 2) 00:07:56.486 5747.003 - 5772.209: 0.0537% ( 8) 00:07:56.486 5772.209 - 5797.415: 0.2201% ( 31) 00:07:56.486 5797.415 - 5822.622: 0.4832% ( 49) 00:07:56.486 5822.622 - 5847.828: 0.9665% ( 90) 00:07:56.486 5847.828 - 5873.034: 1.6914% ( 135) 00:07:56.486 5873.034 - 5898.240: 2.6418% ( 177) 00:07:56.486 5898.240 - 5923.446: 4.0217% ( 257) 00:07:56.486 5923.446 - 5948.652: 5.4661% ( 269) 00:07:56.486 5948.652 - 5973.858: 7.3507% ( 351) 00:07:56.486 5973.858 - 5999.065: 9.0743% ( 321) 00:07:56.486 5999.065 - 6024.271: 11.1845% ( 393) 00:07:56.486 6024.271 - 6049.477: 13.3913% ( 411) 00:07:56.486 6049.477 - 6074.683: 15.7163% ( 433) 00:07:56.486 6074.683 - 6099.889: 17.9983% ( 425) 00:07:56.486 6099.889 - 6125.095: 20.4253% ( 452) 00:07:56.486 6125.095 - 6150.302: 22.6965% ( 423) 00:07:56.486 6150.302 - 6175.508: 25.0054% ( 430) 00:07:56.486 6175.508 - 6200.714:
27.3948% ( 445) 00:07:56.486 6200.714 - 6225.920: 29.8540% ( 458) 00:07:56.486 6225.920 - 6251.126: 32.3615% ( 467) 00:07:56.486 6251.126 - 6276.332: 34.8153% ( 457) 00:07:56.486 6276.332 - 6301.538: 37.3013% ( 463) 00:07:56.486 6301.538 - 6326.745: 39.7874% ( 463) 00:07:56.486 6326.745 - 6351.951: 42.2788% ( 464) 00:07:56.486 6351.951 - 6377.157: 44.8454% ( 478) 00:07:56.486 6377.157 - 6402.363: 47.3207% ( 461) 00:07:56.486 6402.363 - 6427.569: 49.7584% ( 454) 00:07:56.486 6427.569 - 6452.775: 52.2498% ( 464) 00:07:56.486 6452.775 - 6503.188: 57.2165% ( 925) 00:07:56.486 6503.188 - 6553.600: 62.3389% ( 954) 00:07:56.486 6553.600 - 6604.012: 67.3003% ( 924) 00:07:56.486 6604.012 - 6654.425: 71.9824% ( 872) 00:07:56.486 6654.425 - 6704.837: 76.0417% ( 756) 00:07:56.486 6704.837 - 6755.249: 79.3653% ( 619) 00:07:56.486 6755.249 - 6805.662: 81.7386% ( 442) 00:07:56.486 6805.662 - 6856.074: 83.2796% ( 287) 00:07:56.486 6856.074 - 6906.486: 84.2945% ( 189) 00:07:56.486 6906.486 - 6956.898: 85.0569% ( 142) 00:07:56.486 6956.898 - 7007.311: 85.7012% ( 120) 00:07:56.486 7007.311 - 7057.723: 86.2650% ( 105) 00:07:56.486 7057.723 - 7108.135: 86.7214% ( 85) 00:07:56.486 7108.135 - 7158.548: 87.1027% ( 71) 00:07:56.486 7158.548 - 7208.960: 87.4624% ( 67) 00:07:56.486 7208.960 - 7259.372: 87.7846% ( 60) 00:07:56.486 7259.372 - 7309.785: 88.0906% ( 57) 00:07:56.486 7309.785 - 7360.197: 88.3860% ( 55) 00:07:56.486 7360.197 - 7410.609: 88.6813% ( 55) 00:07:56.486 7410.609 - 7461.022: 88.9336% ( 47) 00:07:56.486 7461.022 - 7511.434: 89.1806% ( 46) 00:07:56.486 7511.434 - 7561.846: 89.3900% ( 39) 00:07:56.486 7561.846 - 7612.258: 89.5726% ( 34) 00:07:56.486 7612.258 - 7662.671: 89.7444% ( 32) 00:07:56.486 7662.671 - 7713.083: 89.9270% ( 34) 00:07:56.486 7713.083 - 7763.495: 90.0451% ( 22) 00:07:56.486 7763.495 - 7813.908: 90.1310% ( 16) 00:07:56.486 7813.908 - 7864.320: 90.2116% ( 15) 00:07:56.486 7864.320 - 7914.732: 90.3082% ( 18) 00:07:56.486 7914.732 - 7965.145: 90.3995% ( 17) 
00:07:56.486 7965.145 - 8015.557: 90.5176% ( 22) 00:07:56.486 8015.557 - 8065.969: 90.6143% ( 18) 00:07:56.486 8065.969 - 8116.382: 90.7055% ( 17) 00:07:56.486 8116.382 - 8166.794: 90.8129% ( 20) 00:07:56.486 8166.794 - 8217.206: 90.9364% ( 23) 00:07:56.486 8217.206 - 8267.618: 91.0492% ( 21) 00:07:56.486 8267.618 - 8318.031: 91.1834% ( 25) 00:07:56.486 8318.031 - 8368.443: 91.3230% ( 26) 00:07:56.486 8368.443 - 8418.855: 91.4465% ( 23) 00:07:56.486 8418.855 - 8469.268: 91.5915% ( 27) 00:07:56.486 8469.268 - 8519.680: 91.7365% ( 27) 00:07:56.486 8519.680 - 8570.092: 91.8653% ( 24) 00:07:56.486 8570.092 - 8620.505: 92.0157% ( 28) 00:07:56.486 8620.505 - 8670.917: 92.1553% ( 26) 00:07:56.486 8670.917 - 8721.329: 92.2841% ( 24) 00:07:56.486 8721.329 - 8771.742: 92.4399% ( 29) 00:07:56.486 8771.742 - 8822.154: 92.6009% ( 30) 00:07:56.487 8822.154 - 8872.566: 92.7298% ( 24) 00:07:56.487 8872.566 - 8922.978: 92.8748% ( 27) 00:07:56.487 8922.978 - 8973.391: 93.0412% ( 31) 00:07:56.487 8973.391 - 9023.803: 93.1970% ( 29) 00:07:56.487 9023.803 - 9074.215: 93.3366% ( 26) 00:07:56.487 9074.215 - 9124.628: 93.4976% ( 30) 00:07:56.487 9124.628 - 9175.040: 93.6372% ( 26) 00:07:56.487 9175.040 - 9225.452: 93.7500% ( 21) 00:07:56.487 9225.452 - 9275.865: 93.8789% ( 24) 00:07:56.487 9275.865 - 9326.277: 93.9970% ( 22) 00:07:56.487 9326.277 - 9376.689: 94.1312% ( 25) 00:07:56.487 9376.689 - 9427.102: 94.2655% ( 25) 00:07:56.487 9427.102 - 9477.514: 94.4104% ( 27) 00:07:56.487 9477.514 - 9527.926: 94.5769% ( 31) 00:07:56.487 9527.926 - 9578.338: 94.7219% ( 27) 00:07:56.487 9578.338 - 9628.751: 94.8937% ( 32) 00:07:56.487 9628.751 - 9679.163: 95.0548% ( 30) 00:07:56.487 9679.163 - 9729.575: 95.2373% ( 34) 00:07:56.487 9729.575 - 9779.988: 95.3662% ( 24) 00:07:56.487 9779.988 - 9830.400: 95.5326% ( 31) 00:07:56.487 9830.400 - 9880.812: 95.6830% ( 28) 00:07:56.487 9880.812 - 9931.225: 95.8548% ( 32) 00:07:56.487 9931.225 - 9981.637: 96.0320% ( 33) 00:07:56.487 9981.637 - 10032.049: 
96.1823% ( 28) 00:07:56.487 10032.049 - 10082.462: 96.3381% ( 29) 00:07:56.487 10082.462 - 10132.874: 96.4777% ( 26) 00:07:56.487 10132.874 - 10183.286: 96.6226% ( 27) 00:07:56.487 10183.286 - 10233.698: 96.7569% ( 25) 00:07:56.487 10233.698 - 10284.111: 96.8911% ( 25) 00:07:56.487 10284.111 - 10334.523: 97.0092% ( 22) 00:07:56.487 10334.523 - 10384.935: 97.1113% ( 19) 00:07:56.487 10384.935 - 10435.348: 97.2025% ( 17) 00:07:56.487 10435.348 - 10485.760: 97.2670% ( 12) 00:07:56.487 10485.760 - 10536.172: 97.3368% ( 13) 00:07:56.487 10536.172 - 10586.585: 97.3797% ( 8) 00:07:56.487 10586.585 - 10636.997: 97.4227% ( 8) 00:07:56.487 10636.997 - 10687.409: 97.4442% ( 4) 00:07:56.487 10687.409 - 10737.822: 97.4710% ( 5) 00:07:56.487 10737.822 - 10788.234: 97.4871% ( 3) 00:07:56.487 10788.234 - 10838.646: 97.5354% ( 9) 00:07:56.487 10838.646 - 10889.058: 97.6106% ( 14) 00:07:56.487 10889.058 - 10939.471: 97.6589% ( 9) 00:07:56.487 10939.471 - 10989.883: 97.7073% ( 9) 00:07:56.487 10989.883 - 11040.295: 97.7610% ( 10) 00:07:56.487 11040.295 - 11090.708: 97.8254% ( 12) 00:07:56.487 11090.708 - 11141.120: 97.9059% ( 15) 00:07:56.487 11141.120 - 11191.532: 97.9650% ( 11) 00:07:56.487 11191.532 - 11241.945: 98.0241% ( 11) 00:07:56.487 11241.945 - 11292.357: 98.0831% ( 11) 00:07:56.487 11292.357 - 11342.769: 98.1314% ( 9) 00:07:56.487 11342.769 - 11393.182: 98.1798% ( 9) 00:07:56.487 11393.182 - 11443.594: 98.2281% ( 9) 00:07:56.487 11443.594 - 11494.006: 98.2710% ( 8) 00:07:56.487 11494.006 - 11544.418: 98.3301% ( 11) 00:07:56.487 11544.418 - 11594.831: 98.3945% ( 12) 00:07:56.487 11594.831 - 11645.243: 98.4590% ( 12) 00:07:56.487 11645.243 - 11695.655: 98.5180% ( 11) 00:07:56.487 11695.655 - 11746.068: 98.5771% ( 11) 00:07:56.487 11746.068 - 11796.480: 98.6362% ( 11) 00:07:56.487 11796.480 - 11846.892: 98.6738% ( 7) 00:07:56.487 11846.892 - 11897.305: 98.7060% ( 6) 00:07:56.487 11897.305 - 11947.717: 98.7382% ( 6) 00:07:56.487 11947.717 - 11998.129: 98.7704% ( 6) 
00:07:56.487 11998.129 - 12048.542: 98.8026% ( 6) 00:07:56.487 12048.542 - 12098.954: 98.8295% ( 5) 00:07:56.487 12098.954 - 12149.366: 98.8671% ( 7) 00:07:56.487 12149.366 - 12199.778: 98.8939% ( 5) 00:07:56.487 12199.778 - 12250.191: 98.9154% ( 4) 00:07:56.487 12250.191 - 12300.603: 98.9315% ( 3) 00:07:56.487 12300.603 - 12351.015: 98.9530% ( 4) 00:07:56.487 12351.015 - 12401.428: 98.9691% ( 3) 00:07:56.487 13611.323 - 13712.148: 99.0067% ( 7) 00:07:56.487 13712.148 - 13812.972: 99.0281% ( 4) 00:07:56.487 13812.972 - 13913.797: 99.0442% ( 3) 00:07:56.487 13913.797 - 14014.622: 99.0604% ( 3) 00:07:56.487 14014.622 - 14115.446: 99.0818% ( 4) 00:07:56.487 14115.446 - 14216.271: 99.1087% ( 5) 00:07:56.487 14216.271 - 14317.095: 99.1302% ( 4) 00:07:56.487 14317.095 - 14417.920: 99.1463% ( 3) 00:07:56.487 14417.920 - 14518.745: 99.1624% ( 3) 00:07:56.487 14518.745 - 14619.569: 99.1785% ( 3) 00:07:56.487 14619.569 - 14720.394: 99.2000% ( 4) 00:07:56.487 14720.394 - 14821.218: 99.2214% ( 4) 00:07:56.487 14821.218 - 14922.043: 99.2375% ( 3) 00:07:56.487 14922.043 - 15022.868: 99.2590% ( 4) 00:07:56.487 15022.868 - 15123.692: 99.2805% ( 4) 00:07:56.487 15123.692 - 15224.517: 99.3020% ( 4) 00:07:56.487 15224.517 - 15325.342: 99.3127% ( 2) 00:07:56.487 22282.240 - 22383.065: 99.3235% ( 2) 00:07:56.487 22383.065 - 22483.889: 99.3288% ( 1) 00:07:56.487 22483.889 - 22584.714: 99.3449% ( 3) 00:07:56.487 22584.714 - 22685.538: 99.3557% ( 2) 00:07:56.487 22685.538 - 22786.363: 99.3718% ( 3) 00:07:56.487 22786.363 - 22887.188: 99.3879% ( 3) 00:07:56.487 22887.188 - 22988.012: 99.4040% ( 3) 00:07:56.487 22988.012 - 23088.837: 99.4147% ( 2) 00:07:56.487 23088.837 - 23189.662: 99.4308% ( 3) 00:07:56.487 23189.662 - 23290.486: 99.4416% ( 2) 00:07:56.487 23290.486 - 23391.311: 99.4577% ( 3) 00:07:56.487 23391.311 - 23492.135: 99.4738% ( 3) 00:07:56.487 23492.135 - 23592.960: 99.4953% ( 4) 00:07:56.487 23592.960 - 23693.785: 99.5114% ( 3) 00:07:56.487 23693.785 - 23794.609: 99.5221% ( 2) 
00:07:56.487 23794.609 - 23895.434: 99.5382% ( 3) 00:07:56.487 23895.434 - 23996.258: 99.5543% ( 3) 00:07:56.487 23996.258 - 24097.083: 99.5704% ( 3) 00:07:56.487 24097.083 - 24197.908: 99.5812% ( 2) 00:07:56.487 24197.908 - 24298.732: 99.5973% ( 3) 00:07:56.487 24298.732 - 24399.557: 99.6134% ( 3) 00:07:56.487 24399.557 - 24500.382: 99.6241% ( 2) 00:07:56.487 24500.382 - 24601.206: 99.6402% ( 3) 00:07:56.487 24601.206 - 24702.031: 99.6510% ( 2) 00:07:56.487 24702.031 - 24802.855: 99.6564% ( 1) 00:07:56.487 31860.578 - 32062.228: 99.6617% ( 1) 00:07:56.487 32062.228 - 32263.877: 99.7047% ( 8) 00:07:56.487 32263.877 - 32465.526: 99.7530% ( 9) 00:07:56.487 32465.526 - 32667.175: 99.7960% ( 8) 00:07:56.487 32667.175 - 32868.825: 99.8443% ( 9) 00:07:56.487 32868.825 - 33070.474: 99.8872% ( 8) 00:07:56.487 33070.474 - 33272.123: 99.9302% ( 8) 00:07:56.487 33272.123 - 33473.772: 99.9732% ( 8) 00:07:56.487 33473.772 - 33675.422: 100.0000% ( 5) 00:07:56.487 00:07:56.487 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:56.487 ============================================================================== 00:07:56.487 Range in us Cumulative IO count 00:07:56.487 5747.003 - 5772.209: 0.0107% ( 2) 00:07:56.487 5772.209 - 5797.415: 0.1181% ( 20) 00:07:56.487 5797.415 - 5822.622: 0.4349% ( 59) 00:07:56.487 5822.622 - 5847.828: 0.8054% ( 69) 00:07:56.487 5847.828 - 5873.034: 1.3907% ( 109) 00:07:56.487 5873.034 - 5898.240: 2.3518% ( 179) 00:07:56.487 5898.240 - 5923.446: 3.5653% ( 226) 00:07:56.487 5923.446 - 5948.652: 5.2942% ( 322) 00:07:56.487 5948.652 - 5973.858: 7.4205% ( 396) 00:07:56.487 5973.858 - 5999.065: 9.3696% ( 363) 00:07:56.487 5999.065 - 6024.271: 11.3724% ( 373) 00:07:56.487 6024.271 - 6049.477: 13.5900% ( 413) 00:07:56.487 6049.477 - 6074.683: 15.9848% ( 446) 00:07:56.487 6074.683 - 6099.889: 18.3902% ( 448) 00:07:56.487 6099.889 - 6125.095: 20.7850% ( 446) 00:07:56.487 6125.095 - 6150.302: 22.9381% ( 401) 00:07:56.487 6150.302 - 6175.508: 
25.1396% ( 410) 00:07:56.487 6175.508 - 6200.714: 27.4753% ( 435) 00:07:56.487 6200.714 - 6225.920: 29.8378% ( 440) 00:07:56.487 6225.920 - 6251.126: 32.2165% ( 443) 00:07:56.487 6251.126 - 6276.332: 34.6435% ( 452) 00:07:56.487 6276.332 - 6301.538: 37.1188% ( 461) 00:07:56.488 6301.538 - 6326.745: 39.5296% ( 449) 00:07:56.488 6326.745 - 6351.951: 42.0318% ( 466) 00:07:56.488 6351.951 - 6377.157: 44.3997% ( 441) 00:07:56.488 6377.157 - 6402.363: 46.8696% ( 460) 00:07:56.488 6402.363 - 6427.569: 49.3288% ( 458) 00:07:56.488 6427.569 - 6452.775: 51.7934% ( 459) 00:07:56.488 6452.775 - 6503.188: 56.8030% ( 933) 00:07:56.488 6503.188 - 6553.600: 61.8718% ( 944) 00:07:56.488 6553.600 - 6604.012: 66.8063% ( 919) 00:07:56.488 6604.012 - 6654.425: 71.5260% ( 879) 00:07:56.488 6654.425 - 6704.837: 75.6229% ( 763) 00:07:56.488 6704.837 - 6755.249: 78.9143% ( 613) 00:07:56.488 6755.249 - 6805.662: 81.2876% ( 442) 00:07:56.488 6805.662 - 6856.074: 82.8447% ( 290) 00:07:56.488 6856.074 - 6906.486: 83.9562% ( 207) 00:07:56.488 6906.486 - 6956.898: 84.7509% ( 148) 00:07:56.488 6956.898 - 7007.311: 85.4757% ( 135) 00:07:56.488 7007.311 - 7057.723: 86.0449% ( 106) 00:07:56.488 7057.723 - 7108.135: 86.5496% ( 94) 00:07:56.488 7108.135 - 7158.548: 87.0006% ( 84) 00:07:56.488 7158.548 - 7208.960: 87.4034% ( 75) 00:07:56.488 7208.960 - 7259.372: 87.8114% ( 76) 00:07:56.488 7259.372 - 7309.785: 88.1551% ( 64) 00:07:56.488 7309.785 - 7360.197: 88.4611% ( 57) 00:07:56.488 7360.197 - 7410.609: 88.7564% ( 55) 00:07:56.488 7410.609 - 7461.022: 89.0303% ( 51) 00:07:56.488 7461.022 - 7511.434: 89.2665% ( 44) 00:07:56.488 7511.434 - 7561.846: 89.4974% ( 43) 00:07:56.488 7561.846 - 7612.258: 89.6907% ( 36) 00:07:56.488 7612.258 - 7662.671: 89.8679% ( 33) 00:07:56.488 7662.671 - 7713.083: 89.9968% ( 24) 00:07:56.488 7713.083 - 7763.495: 90.1042% ( 20) 00:07:56.488 7763.495 - 7813.908: 90.2169% ( 21) 00:07:56.488 7813.908 - 7864.320: 90.3136% ( 18) 00:07:56.488 7864.320 - 7914.732: 90.4908% ( 33) 
00:07:56.488 7914.732 - 7965.145: 90.6143% ( 23) 00:07:56.488 7965.145 - 8015.557: 90.7216% ( 20) 00:07:56.488 8015.557 - 8065.969: 90.8183% ( 18) 00:07:56.488 8065.969 - 8116.382: 90.9203% ( 19) 00:07:56.488 8116.382 - 8166.794: 91.0223% ( 19) 00:07:56.488 8166.794 - 8217.206: 91.1136% ( 17) 00:07:56.488 8217.206 - 8267.618: 91.2210% ( 20) 00:07:56.488 8267.618 - 8318.031: 91.3445% ( 23) 00:07:56.488 8318.031 - 8368.443: 91.4626% ( 22) 00:07:56.488 8368.443 - 8418.855: 91.5754% ( 21) 00:07:56.488 8418.855 - 8469.268: 91.6828% ( 20) 00:07:56.488 8469.268 - 8519.680: 91.7955% ( 21) 00:07:56.488 8519.680 - 8570.092: 91.9298% ( 25) 00:07:56.488 8570.092 - 8620.505: 92.0801% ( 28) 00:07:56.488 8620.505 - 8670.917: 92.2251% ( 27) 00:07:56.488 8670.917 - 8721.329: 92.3593% ( 25) 00:07:56.488 8721.329 - 8771.742: 92.4936% ( 25) 00:07:56.488 8771.742 - 8822.154: 92.6546% ( 30) 00:07:56.488 8822.154 - 8872.566: 92.7835% ( 24) 00:07:56.488 8872.566 - 8922.978: 92.9070% ( 23) 00:07:56.488 8922.978 - 8973.391: 93.0520% ( 27) 00:07:56.488 8973.391 - 9023.803: 93.2184% ( 31) 00:07:56.488 9023.803 - 9074.215: 93.4010% ( 34) 00:07:56.488 9074.215 - 9124.628: 93.5782% ( 33) 00:07:56.488 9124.628 - 9175.040: 93.7554% ( 33) 00:07:56.488 9175.040 - 9225.452: 93.9487% ( 36) 00:07:56.488 9225.452 - 9275.865: 94.1420% ( 36) 00:07:56.488 9275.865 - 9326.277: 94.3460% ( 38) 00:07:56.488 9326.277 - 9376.689: 94.5447% ( 37) 00:07:56.488 9376.689 - 9427.102: 94.7272% ( 34) 00:07:56.488 9427.102 - 9477.514: 94.9152% ( 35) 00:07:56.488 9477.514 - 9527.926: 95.0977% ( 34) 00:07:56.488 9527.926 - 9578.338: 95.2588% ( 30) 00:07:56.488 9578.338 - 9628.751: 95.4360% ( 33) 00:07:56.488 9628.751 - 9679.163: 95.6078% ( 32) 00:07:56.488 9679.163 - 9729.575: 95.7743% ( 31) 00:07:56.488 9729.575 - 9779.988: 95.9568% ( 34) 00:07:56.488 9779.988 - 9830.400: 96.0642% ( 20) 00:07:56.488 9830.400 - 9880.812: 96.1609% ( 18) 00:07:56.488 9880.812 - 9931.225: 96.2629% ( 19) 00:07:56.488 9931.225 - 9981.637: 
96.3542% ( 17) 00:07:56.488 9981.637 - 10032.049: 96.4240% ( 13) 00:07:56.488 10032.049 - 10082.462: 96.4884% ( 12) 00:07:56.488 10082.462 - 10132.874: 96.5582% ( 13) 00:07:56.488 10132.874 - 10183.286: 96.6173% ( 11) 00:07:56.488 10183.286 - 10233.698: 96.6817% ( 12) 00:07:56.488 10233.698 - 10284.111: 96.7300% ( 9) 00:07:56.488 10284.111 - 10334.523: 96.7837% ( 10) 00:07:56.488 10334.523 - 10384.935: 96.8320% ( 9) 00:07:56.488 10384.935 - 10435.348: 96.8804% ( 9) 00:07:56.488 10435.348 - 10485.760: 96.9180% ( 7) 00:07:56.488 10485.760 - 10536.172: 96.9555% ( 7) 00:07:56.488 10536.172 - 10586.585: 96.9931% ( 7) 00:07:56.488 10586.585 - 10636.997: 97.0361% ( 8) 00:07:56.488 10636.997 - 10687.409: 97.0790% ( 8) 00:07:56.488 10687.409 - 10737.822: 97.1327% ( 10) 00:07:56.488 10737.822 - 10788.234: 97.2025% ( 13) 00:07:56.488 10788.234 - 10838.646: 97.2777% ( 14) 00:07:56.488 10838.646 - 10889.058: 97.3690% ( 17) 00:07:56.488 10889.058 - 10939.471: 97.4280% ( 11) 00:07:56.488 10939.471 - 10989.883: 97.4764% ( 9) 00:07:56.488 10989.883 - 11040.295: 97.5838% ( 20) 00:07:56.488 11040.295 - 11090.708: 97.6858% ( 19) 00:07:56.488 11090.708 - 11141.120: 97.7663% ( 15) 00:07:56.488 11141.120 - 11191.532: 97.8361% ( 13) 00:07:56.488 11191.532 - 11241.945: 97.9167% ( 15) 00:07:56.488 11241.945 - 11292.357: 97.9972% ( 15) 00:07:56.488 11292.357 - 11342.769: 98.0885% ( 17) 00:07:56.488 11342.769 - 11393.182: 98.1529% ( 12) 00:07:56.488 11393.182 - 11443.594: 98.2335% ( 15) 00:07:56.488 11443.594 - 11494.006: 98.3140% ( 15) 00:07:56.488 11494.006 - 11544.418: 98.3838% ( 13) 00:07:56.488 11544.418 - 11594.831: 98.4643% ( 15) 00:07:56.488 11594.831 - 11645.243: 98.5610% ( 18) 00:07:56.488 11645.243 - 11695.655: 98.6362% ( 14) 00:07:56.488 11695.655 - 11746.068: 98.7006% ( 12) 00:07:56.488 11746.068 - 11796.480: 98.7597% ( 11) 00:07:56.488 11796.480 - 11846.892: 98.8134% ( 10) 00:07:56.488 11846.892 - 11897.305: 98.8509% ( 7) 00:07:56.488 11897.305 - 11947.717: 98.8832% ( 6) 
00:07:56.488 11947.717 - 11998.129: 98.9100% ( 5) 00:07:56.488 11998.129 - 12048.542: 98.9369% ( 5) 00:07:56.488 12048.542 - 12098.954: 98.9422% ( 1) 00:07:56.488 12098.954 - 12149.366: 98.9530% ( 2) 00:07:56.488 12149.366 - 12199.778: 98.9637% ( 2) 00:07:56.488 12199.778 - 12250.191: 98.9691% ( 1) 00:07:56.488 12905.551 - 13006.375: 98.9959% ( 5) 00:07:56.488 13006.375 - 13107.200: 99.0120% ( 3) 00:07:56.488 13107.200 - 13208.025: 99.0281% ( 3) 00:07:56.488 13208.025 - 13308.849: 99.0496% ( 4) 00:07:56.488 13308.849 - 13409.674: 99.0711% ( 4) 00:07:56.488 13409.674 - 13510.498: 99.0872% ( 3) 00:07:56.488 13510.498 - 13611.323: 99.1087% ( 4) 00:07:56.488 13611.323 - 13712.148: 99.1302% ( 4) 00:07:56.488 13712.148 - 13812.972: 99.1463% ( 3) 00:07:56.488 13812.972 - 13913.797: 99.1677% ( 4) 00:07:56.488 13913.797 - 14014.622: 99.1892% ( 4) 00:07:56.488 14014.622 - 14115.446: 99.2107% ( 4) 00:07:56.488 14115.446 - 14216.271: 99.2268% ( 3) 00:07:56.488 14216.271 - 14317.095: 99.2483% ( 4) 00:07:56.488 14317.095 - 14417.920: 99.2644% ( 3) 00:07:56.488 14417.920 - 14518.745: 99.2859% ( 4) 00:07:56.488 14518.745 - 14619.569: 99.3020% ( 3) 00:07:56.488 14619.569 - 14720.394: 99.3127% ( 2) 00:07:56.488 21475.643 - 21576.468: 99.3181% ( 1) 00:07:56.488 21576.468 - 21677.292: 99.3503% ( 6) 00:07:56.488 21677.292 - 21778.117: 99.3664% ( 3) 00:07:56.488 21778.117 - 21878.942: 99.3718% ( 1) 00:07:56.488 21878.942 - 21979.766: 99.3825% ( 2) 00:07:56.488 21979.766 - 22080.591: 99.3986% ( 3) 00:07:56.488 22080.591 - 22181.415: 99.4094% ( 2) 00:07:56.488 22181.415 - 22282.240: 99.4201% ( 2) 00:07:56.488 22282.240 - 22383.065: 99.4362% ( 3) 00:07:56.488 22383.065 - 22483.889: 99.4523% ( 3) 00:07:56.488 22483.889 - 22584.714: 99.4684% ( 3) 00:07:56.488 22584.714 - 22685.538: 99.4792% ( 2) 00:07:56.488 22685.538 - 22786.363: 99.4953% ( 3) 00:07:56.488 22786.363 - 22887.188: 99.5114% ( 3) 00:07:56.488 22887.188 - 22988.012: 99.5221% ( 2) 00:07:56.488 22988.012 - 23088.837: 99.5382% ( 3) 
00:07:56.488 23088.837 - 23189.662: 99.5543% ( 3) 00:07:56.488 23189.662 - 23290.486: 99.5651% ( 2) 00:07:56.488 23290.486 - 23391.311: 99.5812% ( 3) 00:07:56.488 23391.311 - 23492.135: 99.5973% ( 3) 00:07:56.488 23492.135 - 23592.960: 99.6080% ( 2) 00:07:56.488 23592.960 - 23693.785: 99.6241% ( 3) 00:07:56.488 23693.785 - 23794.609: 99.6349% ( 2) 00:07:56.488 23794.609 - 23895.434: 99.6510% ( 3) 00:07:56.488 23895.434 - 23996.258: 99.6564% ( 1) 00:07:56.488 30650.683 - 30852.332: 99.6886% ( 6) 00:07:56.488 30852.332 - 31053.982: 99.7262% ( 7) 00:07:56.488 31053.982 - 31255.631: 99.7691% ( 8) 00:07:56.488 31255.631 - 31457.280: 99.8121% ( 8) 00:07:56.488 31457.280 - 31658.929: 99.8604% ( 9) 00:07:56.488 31658.929 - 31860.578: 99.9034% ( 8) 00:07:56.488 31860.578 - 32062.228: 99.9463% ( 8) 00:07:56.488 32062.228 - 32263.877: 99.9946% ( 9) 00:07:56.488 32263.877 - 32465.526: 100.0000% ( 1) 00:07:56.488 00:07:56.488 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:56.488 ============================================================================== 00:07:56.488 Range in us Cumulative IO count 00:07:56.488 5721.797 - 5747.003: 0.0215% ( 4) 00:07:56.488 5747.003 - 5772.209: 0.0859% ( 12) 00:07:56.488 5772.209 - 5797.415: 0.2363% ( 28) 00:07:56.488 5797.415 - 5822.622: 0.4457% ( 39) 00:07:56.488 5822.622 - 5847.828: 0.6873% ( 45) 00:07:56.488 5847.828 - 5873.034: 1.3585% ( 125) 00:07:56.488 5873.034 - 5898.240: 2.4162% ( 197) 00:07:56.488 5898.240 - 5923.446: 3.9841% ( 292) 00:07:56.489 5923.446 - 5948.652: 5.5842% ( 298) 00:07:56.489 5948.652 - 5973.858: 7.4420% ( 346) 00:07:56.489 5973.858 - 5999.065: 9.4609% ( 376) 00:07:56.489 5999.065 - 6024.271: 11.4315% ( 367) 00:07:56.489 6024.271 - 6049.477: 13.6168% ( 407) 00:07:56.489 6049.477 - 6074.683: 15.9203% ( 429) 00:07:56.489 6074.683 - 6099.889: 18.2721% ( 438) 00:07:56.489 6099.889 - 6125.095: 20.4628% ( 408) 00:07:56.489 6125.095 - 6150.302: 22.7556% ( 427) 00:07:56.489 6150.302 - 6175.508: 
25.0107% ( 420) 00:07:56.489 6175.508 - 6200.714: 27.3357% ( 433) 00:07:56.489 6200.714 - 6225.920: 29.7197% ( 444) 00:07:56.489 6225.920 - 6251.126: 32.1843% ( 459) 00:07:56.489 6251.126 - 6276.332: 34.5737% ( 445) 00:07:56.489 6276.332 - 6301.538: 37.0543% ( 462) 00:07:56.489 6301.538 - 6326.745: 39.4652% ( 449) 00:07:56.489 6326.745 - 6351.951: 41.9244% ( 458) 00:07:56.489 6351.951 - 6377.157: 44.4373% ( 468) 00:07:56.489 6377.157 - 6402.363: 46.8911% ( 457) 00:07:56.489 6402.363 - 6427.569: 49.4094% ( 469) 00:07:56.489 6427.569 - 6452.775: 51.8954% ( 463) 00:07:56.489 6452.775 - 6503.188: 56.8084% ( 915) 00:07:56.489 6503.188 - 6553.600: 61.7483% ( 920) 00:07:56.489 6553.600 - 6604.012: 66.6345% ( 910) 00:07:56.489 6604.012 - 6654.425: 71.4669% ( 900) 00:07:56.489 6654.425 - 6704.837: 75.5960% ( 769) 00:07:56.489 6704.837 - 6755.249: 78.7640% ( 590) 00:07:56.489 6755.249 - 6805.662: 80.9547% ( 408) 00:07:56.489 6805.662 - 6856.074: 82.6138% ( 309) 00:07:56.489 6856.074 - 6906.486: 83.7736% ( 216) 00:07:56.489 6906.486 - 6956.898: 84.5576% ( 146) 00:07:56.489 6956.898 - 7007.311: 85.2556% ( 130) 00:07:56.489 7007.311 - 7057.723: 85.7979% ( 101) 00:07:56.489 7057.723 - 7108.135: 86.2650% ( 87) 00:07:56.489 7108.135 - 7158.548: 86.6892% ( 79) 00:07:56.489 7158.548 - 7208.960: 87.0006% ( 58) 00:07:56.489 7208.960 - 7259.372: 87.3067% ( 57) 00:07:56.489 7259.372 - 7309.785: 87.6181% ( 58) 00:07:56.489 7309.785 - 7360.197: 87.8920% ( 51) 00:07:56.489 7360.197 - 7410.609: 88.1765% ( 53) 00:07:56.489 7410.609 - 7461.022: 88.4235% ( 46) 00:07:56.489 7461.022 - 7511.434: 88.6437% ( 41) 00:07:56.489 7511.434 - 7561.846: 88.8155% ( 32) 00:07:56.489 7561.846 - 7612.258: 88.9605% ( 27) 00:07:56.489 7612.258 - 7662.671: 89.1108% ( 28) 00:07:56.489 7662.671 - 7713.083: 89.2612% ( 28) 00:07:56.489 7713.083 - 7763.495: 89.4384% ( 33) 00:07:56.489 7763.495 - 7813.908: 89.6424% ( 38) 00:07:56.489 7813.908 - 7864.320: 89.8357% ( 36) 00:07:56.489 7864.320 - 7914.732: 90.0075% ( 32) 
00:07:56.489 7914.732 - 7965.145: 90.1632% ( 29) 00:07:56.489 7965.145 - 8015.557: 90.2921% ( 24) 00:07:56.489 8015.557 - 8065.969: 90.4317% ( 26) 00:07:56.489 8065.969 - 8116.382: 90.6357% ( 38) 00:07:56.489 8116.382 - 8166.794: 90.8237% ( 35) 00:07:56.489 8166.794 - 8217.206: 91.0062% ( 34) 00:07:56.489 8217.206 - 8267.618: 91.1619% ( 29) 00:07:56.489 8267.618 - 8318.031: 91.3284% ( 31) 00:07:56.489 8318.031 - 8368.443: 91.5002% ( 32) 00:07:56.489 8368.443 - 8418.855: 91.6828% ( 34) 00:07:56.489 8418.855 - 8469.268: 91.8439% ( 30) 00:07:56.489 8469.268 - 8519.680: 91.9996% ( 29) 00:07:56.489 8519.680 - 8570.092: 92.1445% ( 27) 00:07:56.489 8570.092 - 8620.505: 92.3003% ( 29) 00:07:56.489 8620.505 - 8670.917: 92.4774% ( 33) 00:07:56.489 8670.917 - 8721.329: 92.6493% ( 32) 00:07:56.489 8721.329 - 8771.742: 92.7942% ( 27) 00:07:56.489 8771.742 - 8822.154: 92.9714% ( 33) 00:07:56.489 8822.154 - 8872.566: 93.1271% ( 29) 00:07:56.489 8872.566 - 8922.978: 93.3097% ( 34) 00:07:56.489 8922.978 - 8973.391: 93.5030% ( 36) 00:07:56.489 8973.391 - 9023.803: 93.7017% ( 37) 00:07:56.489 9023.803 - 9074.215: 93.8574% ( 29) 00:07:56.489 9074.215 - 9124.628: 94.0561% ( 37) 00:07:56.489 9124.628 - 9175.040: 94.2064% ( 28) 00:07:56.489 9175.040 - 9225.452: 94.3621% ( 29) 00:07:56.489 9225.452 - 9275.865: 94.5178% ( 29) 00:07:56.489 9275.865 - 9326.277: 94.6735% ( 29) 00:07:56.489 9326.277 - 9376.689: 94.8507% ( 33) 00:07:56.489 9376.689 - 9427.102: 95.0118% ( 30) 00:07:56.489 9427.102 - 9477.514: 95.1514% ( 26) 00:07:56.489 9477.514 - 9527.926: 95.2857% ( 25) 00:07:56.489 9527.926 - 9578.338: 95.4038% ( 22) 00:07:56.489 9578.338 - 9628.751: 95.5112% ( 20) 00:07:56.489 9628.751 - 9679.163: 95.6454% ( 25) 00:07:56.489 9679.163 - 9729.575: 95.7689% ( 23) 00:07:56.489 9729.575 - 9779.988: 95.8763% ( 20) 00:07:56.489 9779.988 - 9830.400: 95.9729% ( 18) 00:07:56.489 9830.400 - 9880.812: 96.0642% ( 17) 00:07:56.489 9880.812 - 9931.225: 96.1394% ( 14) 00:07:56.489 9931.225 - 9981.637: 
96.2199% ( 15) 00:07:56.489 9981.637 - 10032.049: 96.3220% ( 19) 00:07:56.489 10032.049 - 10082.462: 96.4240% ( 19) 00:07:56.489 10082.462 - 10132.874: 96.5260% ( 19) 00:07:56.489 10132.874 - 10183.286: 96.5958% ( 13) 00:07:56.489 10183.286 - 10233.698: 96.6710% ( 14) 00:07:56.489 10233.698 - 10284.111: 96.7569% ( 16) 00:07:56.489 10284.111 - 10334.523: 96.8589% ( 19) 00:07:56.489 10334.523 - 10384.935: 96.9448% ( 16) 00:07:56.489 10384.935 - 10435.348: 97.0468% ( 19) 00:07:56.489 10435.348 - 10485.760: 97.1327% ( 16) 00:07:56.489 10485.760 - 10536.172: 97.2133% ( 15) 00:07:56.489 10536.172 - 10586.585: 97.2831% ( 13) 00:07:56.489 10586.585 - 10636.997: 97.3636% ( 15) 00:07:56.489 10636.997 - 10687.409: 97.4334% ( 13) 00:07:56.489 10687.409 - 10737.822: 97.5032% ( 13) 00:07:56.489 10737.822 - 10788.234: 97.5784% ( 14) 00:07:56.489 10788.234 - 10838.646: 97.6536% ( 14) 00:07:56.489 10838.646 - 10889.058: 97.7234% ( 13) 00:07:56.489 10889.058 - 10939.471: 97.7932% ( 13) 00:07:56.489 10939.471 - 10989.883: 97.8576% ( 12) 00:07:56.489 10989.883 - 11040.295: 97.9006% ( 8) 00:07:56.489 11040.295 - 11090.708: 97.9596% ( 11) 00:07:56.489 11090.708 - 11141.120: 98.0831% ( 23) 00:07:56.489 11141.120 - 11191.532: 98.1583% ( 14) 00:07:56.489 11191.532 - 11241.945: 98.1959% ( 7) 00:07:56.489 11241.945 - 11292.357: 98.2388% ( 8) 00:07:56.489 11292.357 - 11342.769: 98.2979% ( 11) 00:07:56.489 11342.769 - 11393.182: 98.3570% ( 11) 00:07:56.489 11393.182 - 11443.594: 98.4429% ( 16) 00:07:56.489 11443.594 - 11494.006: 98.4858% ( 8) 00:07:56.489 11494.006 - 11544.418: 98.5341% ( 9) 00:07:56.489 11544.418 - 11594.831: 98.5878% ( 10) 00:07:56.489 11594.831 - 11645.243: 98.6201% ( 6) 00:07:56.489 11645.243 - 11695.655: 98.6630% ( 8) 00:07:56.489 11695.655 - 11746.068: 98.6899% ( 5) 00:07:56.489 11746.068 - 11796.480: 98.7221% ( 6) 00:07:56.489 11796.480 - 11846.892: 98.7597% ( 7) 00:07:56.489 11846.892 - 11897.305: 98.7811% ( 4) 00:07:56.489 11897.305 - 11947.717: 98.8026% ( 4) 
00:07:56.489 11947.717 - 11998.129: 98.8295% ( 5) 00:07:56.489 11998.129 - 12048.542: 98.8563% ( 5) 00:07:56.489 12048.542 - 12098.954: 98.8885% ( 6) 00:07:56.489 12098.954 - 12149.366: 98.9207% ( 6) 00:07:56.489 12149.366 - 12199.778: 98.9530% ( 6) 00:07:56.489 12199.778 - 12250.191: 98.9852% ( 6) 00:07:56.489 12250.191 - 12300.603: 99.0067% ( 4) 00:07:56.489 12300.603 - 12351.015: 99.0389% ( 6) 00:07:56.489 12351.015 - 12401.428: 99.0550% ( 3) 00:07:56.489 12401.428 - 12451.840: 99.0657% ( 2) 00:07:56.489 12451.840 - 12502.252: 99.0765% ( 2) 00:07:56.489 12502.252 - 12552.665: 99.0872% ( 2) 00:07:56.489 12552.665 - 12603.077: 99.0979% ( 2) 00:07:56.489 12603.077 - 12653.489: 99.1087% ( 2) 00:07:56.489 12653.489 - 12703.902: 99.1140% ( 1) 00:07:56.489 12703.902 - 12754.314: 99.1248% ( 2) 00:07:56.489 12754.314 - 12804.726: 99.1355% ( 2) 00:07:56.489 12804.726 - 12855.138: 99.1463% ( 2) 00:07:56.489 12855.138 - 12905.551: 99.1516% ( 1) 00:07:56.489 12905.551 - 13006.375: 99.1731% ( 4) 00:07:56.489 13006.375 - 13107.200: 99.1946% ( 4) 00:07:56.489 13107.200 - 13208.025: 99.2161% ( 4) 00:07:56.489 13208.025 - 13308.849: 99.2322% ( 3) 00:07:56.489 13308.849 - 13409.674: 99.2537% ( 4) 00:07:56.489 13409.674 - 13510.498: 99.2751% ( 4) 00:07:56.489 13510.498 - 13611.323: 99.2912% ( 3) 00:07:56.489 13611.323 - 13712.148: 99.3073% ( 3) 00:07:56.489 13712.148 - 13812.972: 99.3127% ( 1) 00:07:56.489 20669.046 - 20769.871: 99.3235% ( 2) 00:07:56.489 20769.871 - 20870.695: 99.3449% ( 4) 00:07:56.489 20870.695 - 20971.520: 99.3503% ( 1) 00:07:56.489 20971.520 - 21072.345: 99.3664% ( 3) 00:07:56.489 21072.345 - 21173.169: 99.3825% ( 3) 00:07:56.489 21173.169 - 21273.994: 99.3933% ( 2) 00:07:56.489 21273.994 - 21374.818: 99.4147% ( 4) 00:07:56.489 21374.818 - 21475.643: 99.4308% ( 3) 00:07:56.489 21475.643 - 21576.468: 99.4416% ( 2) 00:07:56.489 21576.468 - 21677.292: 99.4577% ( 3) 00:07:56.489 21677.292 - 21778.117: 99.4684% ( 2) 00:07:56.489 21778.117 - 21878.942: 99.4845% ( 3) 
00:07:56.489 21878.942 - 21979.766: 99.4953% ( 2) 00:07:56.489 21979.766 - 22080.591: 99.5114% ( 3) 00:07:56.489 22080.591 - 22181.415: 99.5221% ( 2) 00:07:56.489 22181.415 - 22282.240: 99.5382% ( 3) 00:07:56.489 22282.240 - 22383.065: 99.5543% ( 3) 00:07:56.489 22383.065 - 22483.889: 99.5651% ( 2) 00:07:56.489 22483.889 - 22584.714: 99.5812% ( 3) 00:07:56.489 22584.714 - 22685.538: 99.5919% ( 2) 00:07:56.489 22685.538 - 22786.363: 99.6080% ( 3) 00:07:56.489 22786.363 - 22887.188: 99.6241% ( 3) 00:07:56.489 22887.188 - 22988.012: 99.6402% ( 3) 00:07:56.489 22988.012 - 23088.837: 99.6510% ( 2) 00:07:56.489 23088.837 - 23189.662: 99.6564% ( 1) 00:07:56.489 29239.138 - 29440.788: 99.6671% ( 2) 00:07:56.489 29440.788 - 29642.437: 99.7101% ( 8) 00:07:56.489 29642.437 - 29844.086: 99.7584% ( 9) 00:07:56.489 29844.086 - 30045.735: 99.8013% ( 8) 00:07:56.490 30045.735 - 30247.385: 99.8443% ( 8) 00:07:56.490 30247.385 - 30449.034: 99.8872% ( 8) 00:07:56.490 30449.034 - 30650.683: 99.9356% ( 9) 00:07:56.490 30650.683 - 30852.332: 99.9785% ( 8) 00:07:56.490 30852.332 - 31053.982: 100.0000% ( 4) 00:07:56.490 00:07:56.769 19:51:40 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:57.704 Initializing NVMe Controllers 00:07:57.704 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:57.704 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:57.704 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:57.704 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:57.704 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:57.704 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:57.705 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:57.705 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:57.705 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:57.705 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:57.705 
Initialization complete. Launching workers. 00:07:57.705 ======================================================== 00:07:57.705 Latency(us) 00:07:57.705 Device Information : IOPS MiB/s Average min max 00:07:57.705 PCIE (0000:00:10.0) NSID 1 from core 0: 17487.72 204.93 7329.55 5813.38 34576.80 00:07:57.705 PCIE (0000:00:11.0) NSID 1 from core 0: 17487.72 204.93 7318.53 5907.38 32760.81 00:07:57.705 PCIE (0000:00:13.0) NSID 1 from core 0: 17487.72 204.93 7307.20 5851.32 31722.18 00:07:57.705 PCIE (0000:00:12.0) NSID 1 from core 0: 17487.72 204.93 7296.40 5956.69 30166.02 00:07:57.705 PCIE (0000:00:12.0) NSID 2 from core 0: 17487.72 204.93 7287.66 5830.39 28746.96 00:07:57.705 PCIE (0000:00:12.0) NSID 3 from core 0: 17551.54 205.68 7250.00 5834.62 22508.99 00:07:57.705 ======================================================== 00:07:57.705 Total : 104990.12 1230.35 7298.19 5813.38 34576.80 00:07:57.705 00:07:57.705 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:57.705 ================================================================================= 00:07:57.705 1.00000% : 6074.683us 00:07:57.705 10.00000% : 6402.363us 00:07:57.705 25.00000% : 6604.012us 00:07:57.705 50.00000% : 6906.486us 00:07:57.705 75.00000% : 7309.785us 00:07:57.705 90.00000% : 8418.855us 00:07:57.705 95.00000% : 9427.102us 00:07:57.705 98.00000% : 11040.295us 00:07:57.705 99.00000% : 13006.375us 00:07:57.705 99.50000% : 28230.892us 00:07:57.705 99.90000% : 34280.369us 00:07:57.705 99.99000% : 34683.668us 00:07:57.705 99.99900% : 34683.668us 00:07:57.705 99.99990% : 34683.668us 00:07:57.705 99.99999% : 34683.668us 00:07:57.705 00:07:57.705 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:57.705 ================================================================================= 00:07:57.705 1.00000% : 6225.920us 00:07:57.705 10.00000% : 6452.775us 00:07:57.705 25.00000% : 6654.425us 00:07:57.705 50.00000% : 6906.486us 00:07:57.705 75.00000% : 7259.372us 
00:07:57.705 90.00000% : 8469.268us 00:07:57.705 95.00000% : 9225.452us 00:07:57.705 98.00000% : 11040.295us 00:07:57.705 99.00000% : 13712.148us 00:07:57.705 99.50000% : 26617.698us 00:07:57.705 99.90000% : 32465.526us 00:07:57.705 99.99000% : 32868.825us 00:07:57.705 99.99900% : 32868.825us 00:07:57.705 99.99990% : 32868.825us 00:07:57.705 99.99999% : 32868.825us 00:07:57.705 00:07:57.705 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:57.705 ================================================================================= 00:07:57.705 1.00000% : 6200.714us 00:07:57.705 10.00000% : 6503.188us 00:07:57.705 25.00000% : 6654.425us 00:07:57.705 50.00000% : 6906.486us 00:07:57.705 75.00000% : 7259.372us 00:07:57.705 90.00000% : 8519.680us 00:07:57.705 95.00000% : 9225.452us 00:07:57.705 98.00000% : 11090.708us 00:07:57.705 99.00000% : 13712.148us 00:07:57.705 99.50000% : 25811.102us 00:07:57.705 99.90000% : 31457.280us 00:07:57.705 99.99000% : 31860.578us 00:07:57.705 99.99900% : 31860.578us 00:07:57.705 99.99990% : 31860.578us 00:07:57.705 99.99999% : 31860.578us 00:07:57.705 00:07:57.705 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:57.705 ================================================================================= 00:07:57.705 1.00000% : 6225.920us 00:07:57.705 10.00000% : 6503.188us 00:07:57.705 25.00000% : 6654.425us 00:07:57.705 50.00000% : 6906.486us 00:07:57.705 75.00000% : 7259.372us 00:07:57.705 90.00000% : 8469.268us 00:07:57.705 95.00000% : 9326.277us 00:07:57.705 98.00000% : 11090.708us 00:07:57.705 99.00000% : 13611.323us 00:07:57.705 99.50000% : 24097.083us 00:07:57.705 99.90000% : 29844.086us 00:07:57.705 99.99000% : 30247.385us 00:07:57.705 99.99900% : 30247.385us 00:07:57.705 99.99990% : 30247.385us 00:07:57.705 99.99999% : 30247.385us 00:07:57.705 00:07:57.705 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:57.705 
================================================================================= 00:07:57.705 1.00000% : 6225.920us 00:07:57.705 10.00000% : 6503.188us 00:07:57.705 25.00000% : 6654.425us 00:07:57.705 50.00000% : 6906.486us 00:07:57.705 75.00000% : 7259.372us 00:07:57.705 90.00000% : 8519.680us 00:07:57.705 95.00000% : 9477.514us 00:07:57.705 98.00000% : 11443.594us 00:07:57.705 99.00000% : 13107.200us 00:07:57.705 99.50000% : 22584.714us 00:07:57.705 99.90000% : 28432.542us 00:07:57.705 99.99000% : 28835.840us 00:07:57.705 99.99900% : 28835.840us 00:07:57.705 99.99990% : 28835.840us 00:07:57.705 99.99999% : 28835.840us 00:07:57.705 00:07:57.705 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:57.705 ================================================================================= 00:07:57.705 1.00000% : 6200.714us 00:07:57.705 10.00000% : 6503.188us 00:07:57.705 25.00000% : 6654.425us 00:07:57.705 50.00000% : 6906.486us 00:07:57.705 75.00000% : 7208.960us 00:07:57.705 90.00000% : 8570.092us 00:07:57.705 95.00000% : 9628.751us 00:07:57.705 98.00000% : 11393.182us 00:07:57.705 99.00000% : 12502.252us 00:07:57.705 99.50000% : 16333.588us 00:07:57.705 99.90000% : 22080.591us 00:07:57.705 99.99000% : 22483.889us 00:07:57.705 99.99900% : 22584.714us 00:07:57.705 99.99990% : 22584.714us 00:07:57.705 99.99999% : 22584.714us 00:07:57.705 00:07:57.705 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:57.705 ============================================================================== 00:07:57.705 Range in us Cumulative IO count 00:07:57.705 5797.415 - 5822.622: 0.0114% ( 2) 00:07:57.705 5822.622 - 5847.828: 0.0171% ( 1) 00:07:57.705 5847.828 - 5873.034: 0.0513% ( 6) 00:07:57.705 5873.034 - 5898.240: 0.0912% ( 7) 00:07:57.705 5898.240 - 5923.446: 0.1426% ( 9) 00:07:57.705 5923.446 - 5948.652: 0.2167% ( 13) 00:07:57.705 5948.652 - 5973.858: 0.3650% ( 26) 00:07:57.705 5973.858 - 5999.065: 0.4961% ( 23) 00:07:57.705 5999.065 - 6024.271: 
[latency histogram buckets for the preceding controller: 6024.271 us through 34683.668 us, ending at cumulative 100.0000% ( 4)]
00:07:57.706 
00:07:57.706 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:57.706 ==============================================================================
00:07:57.706 Range in us Cumulative IO count
[latency histogram buckets: 5898.240 us through 32868.825 us, ending at cumulative 100.0000% ( 4)]
00:07:57.707 
00:07:57.707 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:57.707 ==============================================================================
00:07:57.707 Range in us Cumulative IO count
[latency histogram buckets: 5847.828 us through 31860.578 us, ending at cumulative 100.0000% ( 3)]
00:07:57.708 
00:07:57.708 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:57.708 ==============================================================================
00:07:57.708 Range in us Cumulative IO count
[latency histogram buckets: 5948.652 us through 30247.385 us, ending at cumulative 100.0000% ( 5)]
00:07:57.709 
00:07:57.709 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:57.709 ==============================================================================
00:07:57.709 Range in us Cumulative IO count
[latency histogram buckets: 5822.622 us through 27625.945 us, continued]
27827.594: 99.7833% ( 9) 00:07:57.710 27827.594 - 28029.243: 99.8289% ( 8) 00:07:57.710 28029.243 - 28230.892: 99.8745% ( 8) 00:07:57.710 28230.892 - 28432.542: 99.9259% ( 9) 00:07:57.710 28432.542 - 28634.191: 99.9715% ( 8) 00:07:57.710 28634.191 - 28835.840: 100.0000% ( 5) 00:07:57.710 00:07:57.710 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:57.710 ============================================================================== 00:07:57.710 Range in us Cumulative IO count 00:07:57.710 5822.622 - 5847.828: 0.0057% ( 1) 00:07:57.710 5923.446 - 5948.652: 0.0114% ( 1) 00:07:57.710 5948.652 - 5973.858: 0.0227% ( 2) 00:07:57.710 5973.858 - 5999.065: 0.0284% ( 1) 00:07:57.710 5999.065 - 6024.271: 0.0511% ( 4) 00:07:57.710 6024.271 - 6049.477: 0.0795% ( 5) 00:07:57.710 6049.477 - 6074.683: 0.1250% ( 8) 00:07:57.710 6074.683 - 6099.889: 0.2159% ( 16) 00:07:57.710 6099.889 - 6125.095: 0.3807% ( 29) 00:07:57.710 6125.095 - 6150.302: 0.5739% ( 34) 00:07:57.710 6150.302 - 6175.508: 0.7841% ( 37) 00:07:57.710 6175.508 - 6200.714: 1.0341% ( 44) 00:07:57.710 6200.714 - 6225.920: 1.3409% ( 54) 00:07:57.710 6225.920 - 6251.126: 1.8466% ( 89) 00:07:57.710 6251.126 - 6276.332: 2.3125% ( 82) 00:07:57.710 6276.332 - 6301.538: 3.1420% ( 146) 00:07:57.710 6301.538 - 6326.745: 4.0114% ( 153) 00:07:57.710 6326.745 - 6351.951: 4.8864% ( 154) 00:07:57.710 6351.951 - 6377.157: 5.5455% ( 116) 00:07:57.710 6377.157 - 6402.363: 6.5795% ( 182) 00:07:57.710 6402.363 - 6427.569: 8.3068% ( 304) 00:07:57.710 6427.569 - 6452.775: 9.8466% ( 271) 00:07:57.710 6452.775 - 6503.188: 13.2216% ( 594) 00:07:57.710 6503.188 - 6553.600: 16.8068% ( 631) 00:07:57.710 6553.600 - 6604.012: 21.9943% ( 913) 00:07:57.710 6604.012 - 6654.425: 26.7670% ( 840) 00:07:57.710 6654.425 - 6704.837: 31.3011% ( 798) 00:07:57.710 6704.837 - 6755.249: 37.0284% ( 1008) 00:07:57.710 6755.249 - 6805.662: 42.7159% ( 1001) 00:07:57.710 6805.662 - 6856.074: 48.5227% ( 1022) 00:07:57.710 6856.074 - 6906.486: 
53.9205% ( 950) 00:07:57.710 6906.486 - 6956.898: 58.8352% ( 865) 00:07:57.710 6956.898 - 7007.311: 63.0455% ( 741) 00:07:57.710 7007.311 - 7057.723: 67.2841% ( 746) 00:07:57.710 7057.723 - 7108.135: 70.4205% ( 552) 00:07:57.710 7108.135 - 7158.548: 73.1875% ( 487) 00:07:57.710 7158.548 - 7208.960: 75.1080% ( 338) 00:07:57.710 7208.960 - 7259.372: 76.5398% ( 252) 00:07:57.710 7259.372 - 7309.785: 77.8920% ( 238) 00:07:57.710 7309.785 - 7360.197: 78.8523% ( 169) 00:07:57.710 7360.197 - 7410.609: 79.8068% ( 168) 00:07:57.710 7410.609 - 7461.022: 80.3977% ( 104) 00:07:57.710 7461.022 - 7511.434: 81.2784% ( 155) 00:07:57.710 7511.434 - 7561.846: 82.2955% ( 179) 00:07:57.710 7561.846 - 7612.258: 82.8011% ( 89) 00:07:57.710 7612.258 - 7662.671: 83.2045% ( 71) 00:07:57.710 7662.671 - 7713.083: 83.7784% ( 101) 00:07:57.710 7713.083 - 7763.495: 84.3580% ( 102) 00:07:57.710 7763.495 - 7813.908: 84.8352% ( 84) 00:07:57.710 7813.908 - 7864.320: 85.5852% ( 132) 00:07:57.710 7864.320 - 7914.732: 86.0852% ( 88) 00:07:57.710 7914.732 - 7965.145: 86.5057% ( 74) 00:07:57.710 7965.145 - 8015.557: 86.8693% ( 64) 00:07:57.710 8015.557 - 8065.969: 87.2614% ( 69) 00:07:57.710 8065.969 - 8116.382: 87.4830% ( 39) 00:07:57.710 8116.382 - 8166.794: 87.7784% ( 52) 00:07:57.710 8166.794 - 8217.206: 88.0284% ( 44) 00:07:57.710 8217.206 - 8267.618: 88.2159% ( 33) 00:07:57.710 8267.618 - 8318.031: 88.4375% ( 39) 00:07:57.710 8318.031 - 8368.443: 88.6705% ( 41) 00:07:57.710 8368.443 - 8418.855: 88.9034% ( 41) 00:07:57.710 8418.855 - 8469.268: 89.2159% ( 55) 00:07:57.710 8469.268 - 8519.680: 89.7614% ( 96) 00:07:57.710 8519.680 - 8570.092: 90.4375% ( 119) 00:07:57.710 8570.092 - 8620.505: 90.8068% ( 65) 00:07:57.710 8620.505 - 8670.917: 91.1534% ( 61) 00:07:57.710 8670.917 - 8721.329: 91.5852% ( 76) 00:07:57.710 8721.329 - 8771.742: 92.1818% ( 105) 00:07:57.710 8771.742 - 8822.154: 92.4716% ( 51) 00:07:57.710 8822.154 - 8872.566: 92.6875% ( 38) 00:07:57.710 8872.566 - 8922.978: 92.9205% ( 41) 
00:07:57.710 8922.978 - 8973.391: 93.1080% ( 33) 00:07:57.710 8973.391 - 9023.803: 93.2443% ( 24) 00:07:57.710 9023.803 - 9074.215: 93.4318% ( 33) 00:07:57.710 9074.215 - 9124.628: 93.5625% ( 23) 00:07:57.710 9124.628 - 9175.040: 93.7500% ( 33) 00:07:57.710 9175.040 - 9225.452: 93.9659% ( 38) 00:07:57.710 9225.452 - 9275.865: 94.1477% ( 32) 00:07:57.710 9275.865 - 9326.277: 94.2955% ( 26) 00:07:57.710 9326.277 - 9376.689: 94.4545% ( 28) 00:07:57.710 9376.689 - 9427.102: 94.5625% ( 19) 00:07:57.710 9427.102 - 9477.514: 94.6648% ( 18) 00:07:57.710 9477.514 - 9527.926: 94.7898% ( 22) 00:07:57.710 9527.926 - 9578.338: 94.8864% ( 17) 00:07:57.711 9578.338 - 9628.751: 95.0114% ( 22) 00:07:57.711 9628.751 - 9679.163: 95.3807% ( 65) 00:07:57.711 9679.163 - 9729.575: 95.7045% ( 57) 00:07:57.711 9729.575 - 9779.988: 95.9034% ( 35) 00:07:57.711 9779.988 - 9830.400: 96.0398% ( 24) 00:07:57.711 9830.400 - 9880.812: 96.1705% ( 23) 00:07:57.711 9880.812 - 9931.225: 96.3466% ( 31) 00:07:57.711 9931.225 - 9981.637: 96.6080% ( 46) 00:07:57.711 9981.637 - 10032.049: 96.8807% ( 48) 00:07:57.711 10032.049 - 10082.462: 97.1023% ( 39) 00:07:57.711 10082.462 - 10132.874: 97.3011% ( 35) 00:07:57.711 10132.874 - 10183.286: 97.4545% ( 27) 00:07:57.711 10183.286 - 10233.698: 97.5739% ( 21) 00:07:57.711 10233.698 - 10284.111: 97.6591% ( 15) 00:07:57.711 10284.111 - 10334.523: 97.7216% ( 11) 00:07:57.711 10334.523 - 10384.935: 97.7500% ( 5) 00:07:57.711 10384.935 - 10435.348: 97.7670% ( 3) 00:07:57.711 10435.348 - 10485.760: 97.7784% ( 2) 00:07:57.711 10485.760 - 10536.172: 97.7898% ( 2) 00:07:57.711 10536.172 - 10586.585: 97.7955% ( 1) 00:07:57.711 10586.585 - 10636.997: 97.8125% ( 3) 00:07:57.711 10636.997 - 10687.409: 97.8182% ( 1) 00:07:57.711 10788.234 - 10838.646: 97.8239% ( 1) 00:07:57.711 11191.532 - 11241.945: 97.8295% ( 1) 00:07:57.711 11241.945 - 11292.357: 97.8807% ( 9) 00:07:57.711 11292.357 - 11342.769: 97.9375% ( 10) 00:07:57.711 11342.769 - 11393.182: 98.0114% ( 13) 00:07:57.711 
11393.182 - 11443.594: 98.0909% ( 14) 00:07:57.711 11443.594 - 11494.006: 98.1705% ( 14) 00:07:57.711 11494.006 - 11544.418: 98.2216% ( 9) 00:07:57.711 11544.418 - 11594.831: 98.2443% ( 4) 00:07:57.711 11594.831 - 11645.243: 98.2784% ( 6) 00:07:57.711 11645.243 - 11695.655: 98.3295% ( 9) 00:07:57.711 11695.655 - 11746.068: 98.3977% ( 12) 00:07:57.711 11746.068 - 11796.480: 98.4375% ( 7) 00:07:57.711 11796.480 - 11846.892: 98.4602% ( 4) 00:07:57.711 11846.892 - 11897.305: 98.4830% ( 4) 00:07:57.711 11897.305 - 11947.717: 98.5000% ( 3) 00:07:57.711 11947.717 - 11998.129: 98.5227% ( 4) 00:07:57.711 11998.129 - 12048.542: 98.5398% ( 3) 00:07:57.711 12048.542 - 12098.954: 98.5852% ( 8) 00:07:57.711 12098.954 - 12149.366: 98.6250% ( 7) 00:07:57.711 12149.366 - 12199.778: 98.6648% ( 7) 00:07:57.711 12199.778 - 12250.191: 98.6989% ( 6) 00:07:57.711 12250.191 - 12300.603: 98.7273% ( 5) 00:07:57.711 12300.603 - 12351.015: 98.8125% ( 15) 00:07:57.711 12351.015 - 12401.428: 98.8977% ( 15) 00:07:57.711 12401.428 - 12451.840: 98.9489% ( 9) 00:07:57.711 12451.840 - 12502.252: 99.0057% ( 10) 00:07:57.711 12502.252 - 12552.665: 99.0455% ( 7) 00:07:57.711 12552.665 - 12603.077: 99.0909% ( 8) 00:07:57.711 12603.077 - 12653.489: 99.1250% ( 6) 00:07:57.711 12653.489 - 12703.902: 99.1648% ( 7) 00:07:57.711 12703.902 - 12754.314: 99.1761% ( 2) 00:07:57.711 12754.314 - 12804.726: 99.1989% ( 4) 00:07:57.711 12804.726 - 12855.138: 99.2102% ( 2) 00:07:57.711 12855.138 - 12905.551: 99.2216% ( 2) 00:07:57.711 12905.551 - 13006.375: 99.2386% ( 3) 00:07:57.711 13006.375 - 13107.200: 99.2500% ( 2) 00:07:57.711 13107.200 - 13208.025: 99.2670% ( 3) 00:07:57.711 13208.025 - 13308.849: 99.2727% ( 1) 00:07:57.711 15224.517 - 15325.342: 99.2841% ( 2) 00:07:57.711 15325.342 - 15426.166: 99.3068% ( 4) 00:07:57.711 15426.166 - 15526.991: 99.3295% ( 4) 00:07:57.711 15526.991 - 15627.815: 99.3580% ( 5) 00:07:57.711 15627.815 - 15728.640: 99.3807% ( 4) 00:07:57.711 15728.640 - 15829.465: 99.4034% ( 4) 
00:07:57.711 15829.465 - 15930.289: 99.4261% ( 4) 00:07:57.711 15930.289 - 16031.114: 99.4432% ( 3) 00:07:57.711 16031.114 - 16131.938: 99.4659% ( 4) 00:07:57.711 16131.938 - 16232.763: 99.4943% ( 5) 00:07:57.711 16232.763 - 16333.588: 99.5170% ( 4) 00:07:57.711 16333.588 - 16434.412: 99.5398% ( 4) 00:07:57.711 16434.412 - 16535.237: 99.5682% ( 5) 00:07:57.711 16535.237 - 16636.062: 99.5909% ( 4) 00:07:57.711 16636.062 - 16736.886: 99.6136% ( 4) 00:07:57.711 16736.886 - 16837.711: 99.6364% ( 4) 00:07:57.711 20870.695 - 20971.520: 99.6477% ( 2) 00:07:57.711 20971.520 - 21072.345: 99.6705% ( 4) 00:07:57.711 21072.345 - 21173.169: 99.6932% ( 4) 00:07:57.711 21173.169 - 21273.994: 99.7216% ( 5) 00:07:57.711 21273.994 - 21374.818: 99.7443% ( 4) 00:07:57.711 21374.818 - 21475.643: 99.7670% ( 4) 00:07:57.711 21475.643 - 21576.468: 99.7898% ( 4) 00:07:57.711 21576.468 - 21677.292: 99.8125% ( 4) 00:07:57.711 21677.292 - 21778.117: 99.8352% ( 4) 00:07:57.711 21778.117 - 21878.942: 99.8580% ( 4) 00:07:57.711 21878.942 - 21979.766: 99.8807% ( 4) 00:07:57.711 21979.766 - 22080.591: 99.9034% ( 4) 00:07:57.711 22080.591 - 22181.415: 99.9261% ( 4) 00:07:57.711 22181.415 - 22282.240: 99.9489% ( 4) 00:07:57.711 22282.240 - 22383.065: 99.9716% ( 4) 00:07:57.711 22383.065 - 22483.889: 99.9943% ( 4) 00:07:57.711 22483.889 - 22584.714: 100.0000% ( 1) 00:07:57.711 00:07:57.968 ************************************ 00:07:57.968 END TEST nvme_perf 00:07:57.968 ************************************ 00:07:57.968 19:51:42 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:57.968 00:07:57.968 real 0m2.483s 00:07:57.968 user 0m2.200s 00:07:57.968 sys 0m0.182s 00:07:57.968 19:51:42 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.968 19:51:42 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.968 19:51:42 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:57.968 
19:51:42 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:57.968 19:51:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.968 19:51:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.968 ************************************ 00:07:57.968 START TEST nvme_hello_world 00:07:57.968 ************************************ 00:07:57.968 19:51:42 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:57.968 Initializing NVMe Controllers 00:07:57.968 Attached to 0000:00:10.0 00:07:57.968 Namespace ID: 1 size: 6GB 00:07:57.968 Attached to 0000:00:11.0 00:07:57.968 Namespace ID: 1 size: 5GB 00:07:57.968 Attached to 0000:00:13.0 00:07:57.968 Namespace ID: 1 size: 1GB 00:07:57.968 Attached to 0000:00:12.0 00:07:57.968 Namespace ID: 1 size: 4GB 00:07:57.968 Namespace ID: 2 size: 4GB 00:07:57.968 Namespace ID: 3 size: 4GB 00:07:57.968 Initialization complete. 00:07:57.968 INFO: using host memory buffer for IO 00:07:57.968 Hello world! 00:07:57.968 INFO: using host memory buffer for IO 00:07:57.968 Hello world! 00:07:57.968 INFO: using host memory buffer for IO 00:07:57.968 Hello world! 00:07:57.968 INFO: using host memory buffer for IO 00:07:57.968 Hello world! 00:07:57.968 INFO: using host memory buffer for IO 00:07:57.968 Hello world! 00:07:57.968 INFO: using host memory buffer for IO 00:07:57.968 Hello world! 
00:07:57.968 ************************************ 00:07:57.968 END TEST nvme_hello_world 00:07:57.968 ************************************ 00:07:57.968 00:07:57.968 real 0m0.208s 00:07:57.968 user 0m0.074s 00:07:57.968 sys 0m0.090s 00:07:57.968 19:51:42 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.968 19:51:42 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:58.225 19:51:42 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:58.225 19:51:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.225 19:51:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.225 19:51:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.225 ************************************ 00:07:58.225 START TEST nvme_sgl 00:07:58.225 ************************************ 00:07:58.225 19:51:42 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:58.225 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:58.225 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:58.225 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:58.225 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:58.225 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:58.225 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:58.225 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:58.225 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:58.225 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:58.483 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:58.483 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:58.483 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:58.483 
0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:58.483 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:58.483 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:58.483 NVMe Readv/Writev Request test 00:07:58.483 Attached to 0000:00:10.0 00:07:58.483 Attached to 0000:00:11.0 00:07:58.483 Attached to 0000:00:13.0 00:07:58.483 Attached to 0000:00:12.0 00:07:58.483 0000:00:10.0: build_io_request_2 test passed 00:07:58.483 0000:00:10.0: build_io_request_4 test 
passed 00:07:58.483 0000:00:10.0: build_io_request_5 test passed 00:07:58.483 0000:00:10.0: build_io_request_6 test passed 00:07:58.483 0000:00:10.0: build_io_request_7 test passed 00:07:58.483 0000:00:10.0: build_io_request_10 test passed 00:07:58.483 0000:00:11.0: build_io_request_2 test passed 00:07:58.483 0000:00:11.0: build_io_request_4 test passed 00:07:58.483 0000:00:11.0: build_io_request_5 test passed 00:07:58.483 0000:00:11.0: build_io_request_6 test passed 00:07:58.483 0000:00:11.0: build_io_request_7 test passed 00:07:58.483 0000:00:11.0: build_io_request_10 test passed 00:07:58.483 Cleaning up... 00:07:58.483 ************************************ 00:07:58.483 END TEST nvme_sgl 00:07:58.483 ************************************ 00:07:58.483 00:07:58.483 real 0m0.273s 00:07:58.483 user 0m0.134s 00:07:58.483 sys 0m0.097s 00:07:58.483 19:51:42 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.483 19:51:42 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:58.483 19:51:42 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:58.483 19:51:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.483 19:51:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.483 19:51:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.483 ************************************ 00:07:58.483 START TEST nvme_e2edp 00:07:58.483 ************************************ 00:07:58.483 19:51:42 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:58.741 NVMe Write/Read with End-to-End data protection test 00:07:58.741 Attached to 0000:00:10.0 00:07:58.741 Attached to 0000:00:11.0 00:07:58.741 Attached to 0000:00:13.0 00:07:58.741 Attached to 0000:00:12.0 00:07:58.741 Cleaning up... 
00:07:58.741 ************************************ 00:07:58.741 END TEST nvme_e2edp 00:07:58.741 ************************************ 00:07:58.741 00:07:58.741 real 0m0.194s 00:07:58.741 user 0m0.061s 00:07:58.741 sys 0m0.090s 00:07:58.741 19:51:42 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.741 19:51:42 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 19:51:42 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:58.741 19:51:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.741 19:51:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.741 19:51:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.741 ************************************ 00:07:58.741 START TEST nvme_reserve 00:07:58.741 ************************************ 00:07:58.741 19:51:42 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:58.741 ===================================================== 00:07:58.741 NVMe Controller at PCI bus 0, device 16, function 0 00:07:58.741 ===================================================== 00:07:58.741 Reservations: Not Supported 00:07:58.741 ===================================================== 00:07:58.741 NVMe Controller at PCI bus 0, device 17, function 0 00:07:58.741 ===================================================== 00:07:58.741 Reservations: Not Supported 00:07:58.741 ===================================================== 00:07:58.741 NVMe Controller at PCI bus 0, device 19, function 0 00:07:58.741 ===================================================== 00:07:58.741 Reservations: Not Supported 00:07:58.741 ===================================================== 00:07:58.741 NVMe Controller at PCI bus 0, device 18, function 0 00:07:58.741 ===================================================== 00:07:58.741 Reservations: Not Supported 
00:07:58.741 Reservation test passed 00:07:58.741 ************************************ 00:07:58.741 END TEST nvme_reserve 00:07:58.741 ************************************ 00:07:58.741 00:07:58.741 real 0m0.202s 00:07:58.741 user 0m0.059s 00:07:58.741 sys 0m0.096s 00:07:58.741 19:51:43 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.741 19:51:43 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:58.999 19:51:43 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:58.999 19:51:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.999 19:51:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.999 19:51:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.999 ************************************ 00:07:58.999 START TEST nvme_err_injection 00:07:58.999 ************************************ 00:07:58.999 19:51:43 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:58.999 NVMe Error Injection test 00:07:58.999 Attached to 0000:00:10.0 00:07:58.999 Attached to 0000:00:11.0 00:07:58.999 Attached to 0000:00:13.0 00:07:58.999 Attached to 0000:00:12.0 00:07:58.999 0000:00:10.0: get features failed as expected 00:07:58.999 0000:00:11.0: get features failed as expected 00:07:58.999 0000:00:13.0: get features failed as expected 00:07:58.999 0000:00:12.0: get features failed as expected 00:07:58.999 0000:00:10.0: get features successfully as expected 00:07:58.999 0000:00:11.0: get features successfully as expected 00:07:58.999 0000:00:13.0: get features successfully as expected 00:07:58.999 0000:00:12.0: get features successfully as expected 00:07:58.999 0000:00:13.0: read failed as expected 00:07:58.999 0000:00:10.0: read failed as expected 00:07:58.999 0000:00:11.0: read failed as expected 00:07:58.999 0000:00:12.0: read failed as 
expected 00:07:58.999 0000:00:10.0: read successfully as expected 00:07:58.999 0000:00:11.0: read successfully as expected 00:07:58.999 0000:00:13.0: read successfully as expected 00:07:58.999 0000:00:12.0: read successfully as expected 00:07:58.999 Cleaning up... 00:07:58.999 00:07:58.999 real 0m0.209s 00:07:58.999 user 0m0.079s 00:07:58.999 sys 0m0.089s 00:07:58.999 19:51:43 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.999 ************************************ 00:07:58.999 END TEST nvme_err_injection 00:07:58.999 ************************************ 00:07:58.999 19:51:43 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:59.257 19:51:43 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:59.257 19:51:43 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:59.257 19:51:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.257 19:51:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.257 ************************************ 00:07:59.257 START TEST nvme_overhead 00:07:59.257 ************************************ 00:07:59.257 19:51:43 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:00.628 Initializing NVMe Controllers 00:08:00.628 Attached to 0000:00:10.0 00:08:00.628 Attached to 0000:00:11.0 00:08:00.628 Attached to 0000:00:13.0 00:08:00.628 Attached to 0000:00:12.0 00:08:00.628 Initialization complete. Launching workers. 
00:08:00.628 submit (in ns) avg, min, max = 11292.3, 9670.0, 46313.8 00:08:00.628 complete (in ns) avg, min, max = 7631.9, 7191.5, 281653.1 00:08:00.628 00:08:00.628 Submit histogram 00:08:00.628 ================ 00:08:00.628 Range in us Cumulative Count 00:08:00.628 9.649 - 9.698: 0.0057% ( 1) 00:08:00.628 10.142 - 10.191: 0.0114% ( 1) 00:08:00.628 10.191 - 10.240: 0.0228% ( 2) 00:08:00.628 10.683 - 10.732: 0.0284% ( 1) 00:08:00.628 10.732 - 10.782: 0.0455% ( 3) 00:08:00.628 10.782 - 10.831: 0.5917% ( 96) 00:08:00.628 10.831 - 10.880: 4.3585% ( 662) 00:08:00.628 10.880 - 10.929: 17.1778% ( 2253) 00:08:00.628 10.929 - 10.978: 35.9090% ( 3292) 00:08:00.628 10.978 - 11.028: 53.9573% ( 3172) 00:08:00.628 11.028 - 11.077: 65.5363% ( 2035) 00:08:00.628 11.077 - 11.126: 71.8691% ( 1113) 00:08:00.628 11.126 - 11.175: 75.4367% ( 627) 00:08:00.628 11.175 - 11.225: 77.1323% ( 298) 00:08:00.628 11.225 - 11.274: 78.4751% ( 236) 00:08:00.628 11.274 - 11.323: 79.6131% ( 200) 00:08:00.628 11.323 - 11.372: 81.1152% ( 264) 00:08:00.628 11.372 - 11.422: 82.9189% ( 317) 00:08:00.628 11.422 - 11.471: 84.6657% ( 307) 00:08:00.628 11.471 - 11.520: 85.9289% ( 222) 00:08:00.628 11.520 - 11.569: 86.9075% ( 172) 00:08:00.628 11.569 - 11.618: 87.6302% ( 127) 00:08:00.628 11.618 - 11.668: 88.2219% ( 104) 00:08:00.628 11.668 - 11.717: 88.7055% ( 85) 00:08:00.628 11.717 - 11.766: 89.3087% ( 106) 00:08:00.628 11.766 - 11.815: 89.9915% ( 120) 00:08:00.628 11.815 - 11.865: 90.6515% ( 116) 00:08:00.628 11.865 - 11.914: 91.3058% ( 115) 00:08:00.628 11.914 - 11.963: 92.1309% ( 145) 00:08:00.628 11.963 - 12.012: 92.7283% ( 105) 00:08:00.628 12.012 - 12.062: 93.6159% ( 156) 00:08:00.628 12.062 - 12.111: 94.3670% ( 132) 00:08:00.628 12.111 - 12.160: 94.9018% ( 94) 00:08:00.628 12.160 - 12.209: 95.3912% ( 86) 00:08:00.628 12.209 - 12.258: 95.7895% ( 70) 00:08:00.628 12.258 - 12.308: 96.0626% ( 48) 00:08:00.628 12.308 - 12.357: 96.3243% ( 46) 00:08:00.628 12.357 - 12.406: 96.4552% ( 23) 00:08:00.628 
12.406 - 12.455: 96.5462% ( 16) 00:08:00.628 12.455 - 12.505: 96.6600% ( 20) 00:08:00.628 12.505 - 12.554: 96.7226% ( 11) 00:08:00.628 12.554 - 12.603: 96.7795% ( 10) 00:08:00.628 12.603 - 12.702: 96.8421% ( 11) 00:08:00.628 12.702 - 12.800: 96.8933% ( 9) 00:08:00.628 12.800 - 12.898: 96.9275% ( 6) 00:08:00.628 12.898 - 12.997: 96.9957% ( 12) 00:08:00.628 12.997 - 13.095: 97.0526% ( 10) 00:08:00.628 13.095 - 13.194: 97.1721% ( 21) 00:08:00.628 13.194 - 13.292: 97.2916% ( 21) 00:08:00.628 13.292 - 13.391: 97.4054% ( 20) 00:08:00.628 13.391 - 13.489: 97.4794% ( 13) 00:08:00.628 13.489 - 13.588: 97.5932% ( 20) 00:08:00.628 13.588 - 13.686: 97.6899% ( 17) 00:08:00.628 13.686 - 13.785: 97.7297% ( 7) 00:08:00.628 13.785 - 13.883: 97.7923% ( 11) 00:08:00.628 13.883 - 13.982: 97.8435% ( 9) 00:08:00.628 13.982 - 14.080: 97.8834% ( 7) 00:08:00.628 14.080 - 14.178: 97.9289% ( 8) 00:08:00.628 14.178 - 14.277: 97.9630% ( 6) 00:08:00.628 14.277 - 14.375: 97.9915% ( 5) 00:08:00.628 14.375 - 14.474: 98.0256% ( 6) 00:08:00.628 14.474 - 14.572: 98.0597% ( 6) 00:08:00.628 14.572 - 14.671: 98.0882% ( 5) 00:08:00.628 14.671 - 14.769: 98.1166% ( 5) 00:08:00.628 14.769 - 14.868: 98.1337% ( 3) 00:08:00.628 14.868 - 14.966: 98.1679% ( 6) 00:08:00.628 14.966 - 15.065: 98.1906% ( 4) 00:08:00.628 15.065 - 15.163: 98.2134% ( 4) 00:08:00.628 15.163 - 15.262: 98.2589% ( 8) 00:08:00.628 15.262 - 15.360: 98.3101% ( 9) 00:08:00.628 15.360 - 15.458: 98.3556% ( 8) 00:08:00.628 15.458 - 15.557: 98.3727% ( 3) 00:08:00.628 15.557 - 15.655: 98.3841% ( 2) 00:08:00.628 15.655 - 15.754: 98.4068% ( 4) 00:08:00.628 15.852 - 15.951: 98.4125% ( 1) 00:08:00.628 16.049 - 16.148: 98.4239% ( 2) 00:08:00.629 16.148 - 16.246: 98.4296% ( 1) 00:08:00.629 16.246 - 16.345: 98.4410% ( 2) 00:08:00.629 16.345 - 16.443: 98.4694% ( 5) 00:08:00.629 16.443 - 16.542: 98.5092% ( 7) 00:08:00.629 16.542 - 16.640: 98.6003% ( 16) 00:08:00.629 16.640 - 16.738: 98.7027% ( 18) 00:08:00.629 16.738 - 16.837: 98.7596% ( 10) 00:08:00.629 
16.837 - 16.935: 98.8222% ( 11) 00:08:00.629 16.935 - 17.034: 98.8962% ( 13) 00:08:00.629 17.034 - 17.132: 98.9701% ( 13) 00:08:00.629 17.132 - 17.231: 99.0441% ( 13) 00:08:00.629 17.231 - 17.329: 99.1238% ( 14) 00:08:00.629 17.329 - 17.428: 99.1807% ( 10) 00:08:00.629 17.428 - 17.526: 99.2091% ( 5) 00:08:00.629 17.526 - 17.625: 99.3001% ( 16) 00:08:00.629 17.625 - 17.723: 99.3570% ( 10) 00:08:00.629 17.723 - 17.822: 99.4310% ( 13) 00:08:00.629 17.822 - 17.920: 99.4765% ( 8) 00:08:00.629 17.920 - 18.018: 99.5505% ( 13) 00:08:00.629 18.018 - 18.117: 99.6017% ( 9) 00:08:00.629 18.117 - 18.215: 99.6472% ( 8) 00:08:00.629 18.215 - 18.314: 99.6814% ( 6) 00:08:00.629 18.314 - 18.412: 99.7098% ( 5) 00:08:00.629 18.412 - 18.511: 99.7155% ( 1) 00:08:00.629 18.511 - 18.609: 99.7269% ( 2) 00:08:00.629 18.609 - 18.708: 99.7326% ( 1) 00:08:00.629 18.708 - 18.806: 99.7496% ( 3) 00:08:00.629 18.806 - 18.905: 99.7610% ( 2) 00:08:00.629 19.003 - 19.102: 99.7667% ( 1) 00:08:00.629 19.102 - 19.200: 99.7724% ( 1) 00:08:00.629 19.397 - 19.495: 99.7781% ( 1) 00:08:00.629 19.495 - 19.594: 99.7895% ( 2) 00:08:00.629 19.692 - 19.791: 99.7952% ( 1) 00:08:00.629 19.791 - 19.889: 99.8009% ( 1) 00:08:00.629 20.283 - 20.382: 99.8065% ( 1) 00:08:00.629 20.972 - 21.071: 99.8122% ( 1) 00:08:00.629 21.071 - 21.169: 99.8179% ( 1) 00:08:00.629 21.465 - 21.563: 99.8293% ( 2) 00:08:00.629 21.563 - 21.662: 99.8350% ( 1) 00:08:00.629 21.662 - 21.760: 99.8407% ( 1) 00:08:00.629 21.760 - 21.858: 99.8521% ( 2) 00:08:00.629 21.858 - 21.957: 99.8634% ( 2) 00:08:00.629 21.957 - 22.055: 99.8805% ( 3) 00:08:00.629 22.154 - 22.252: 99.8862% ( 1) 00:08:00.629 22.449 - 22.548: 99.8919% ( 1) 00:08:00.629 22.646 - 22.745: 99.8976% ( 1) 00:08:00.629 22.942 - 23.040: 99.9033% ( 1) 00:08:00.629 23.040 - 23.138: 99.9090% ( 1) 00:08:00.629 23.138 - 23.237: 99.9203% ( 2) 00:08:00.629 23.729 - 23.828: 99.9260% ( 1) 00:08:00.629 25.009 - 25.108: 99.9317% ( 1) 00:08:00.629 25.600 - 25.797: 99.9374% ( 1) 00:08:00.629 25.994 - 
26.191: 99.9431% ( 1) 00:08:00.629 26.191 - 26.388: 99.9488% ( 1) 00:08:00.629 27.175 - 27.372: 99.9545% ( 1) 00:08:00.629 27.963 - 28.160: 99.9602% ( 1) 00:08:00.629 32.689 - 32.886: 99.9716% ( 2) 00:08:00.629 38.597 - 38.794: 99.9772% ( 1) 00:08:00.629 40.369 - 40.566: 99.9829% ( 1) 00:08:00.629 43.323 - 43.520: 99.9886% ( 1) 00:08:00.629 45.489 - 45.686: 99.9943% ( 1) 00:08:00.629 46.277 - 46.474: 100.0000% ( 1) 00:08:00.629 00:08:00.629 Complete histogram 00:08:00.629 ================== 00:08:00.629 Range in us Cumulative Count 00:08:00.629 7.188 - 7.237: 0.1536% ( 27) 00:08:00.629 7.237 - 7.286: 2.2987% ( 377) 00:08:00.629 7.286 - 7.335: 11.8976% ( 1687) 00:08:00.629 7.335 - 7.385: 32.2105% ( 3570) 00:08:00.629 7.385 - 7.434: 55.4765% ( 4089) 00:08:00.629 7.434 - 7.483: 72.2162% ( 2942) 00:08:00.629 7.483 - 7.532: 80.9388% ( 1533) 00:08:00.629 7.532 - 7.582: 85.0640% ( 725) 00:08:00.629 7.582 - 7.631: 86.9929% ( 339) 00:08:00.629 7.631 - 7.680: 88.1650% ( 206) 00:08:00.629 7.680 - 7.729: 88.7738% ( 107) 00:08:00.629 7.729 - 7.778: 89.0242% ( 44) 00:08:00.629 7.778 - 7.828: 89.4737% ( 79) 00:08:00.629 7.828 - 7.877: 90.6799% ( 212) 00:08:00.629 7.877 - 7.926: 91.8293% ( 202) 00:08:00.629 7.926 - 7.975: 92.9388% ( 195) 00:08:00.629 7.975 - 8.025: 93.8094% ( 153) 00:08:00.629 8.025 - 8.074: 94.8051% ( 175) 00:08:00.629 8.074 - 8.123: 95.7440% ( 165) 00:08:00.629 8.123 - 8.172: 96.5576% ( 143) 00:08:00.629 8.172 - 8.222: 97.1038% ( 96) 00:08:00.629 8.222 - 8.271: 97.4111% ( 54) 00:08:00.629 8.271 - 8.320: 97.6728% ( 46) 00:08:00.629 8.320 - 8.369: 97.7696% ( 17) 00:08:00.629 8.369 - 8.418: 97.8720% ( 18) 00:08:00.629 8.418 - 8.468: 97.9346% ( 11) 00:08:00.629 8.468 - 8.517: 97.9972% ( 11) 00:08:00.629 8.517 - 8.566: 98.0199% ( 4) 00:08:00.629 8.566 - 8.615: 98.0427% ( 4) 00:08:00.629 8.615 - 8.665: 98.0541% ( 2) 00:08:00.629 8.665 - 8.714: 98.0654% ( 2) 00:08:00.629 8.763 - 8.812: 98.0768% ( 2) 00:08:00.629 8.812 - 8.862: 98.0939% ( 3) 00:08:00.629 8.862 - 8.911: 
98.1053% ( 2) 00:08:00.629 9.009 - 9.058: 98.1110% ( 1) 00:08:00.629 9.157 - 9.206: 98.1166% ( 1) 00:08:00.629 9.305 - 9.354: 98.1223% ( 1) 00:08:00.629 9.354 - 9.403: 98.1337% ( 2) 00:08:00.629 9.403 - 9.452: 98.1394% ( 1) 00:08:00.629 9.452 - 9.502: 98.1451% ( 1) 00:08:00.629 9.551 - 9.600: 98.1565% ( 2) 00:08:00.629 9.600 - 9.649: 98.1679% ( 2) 00:08:00.629 9.649 - 9.698: 98.1735% ( 1) 00:08:00.629 9.698 - 9.748: 98.2020% ( 5) 00:08:00.629 9.748 - 9.797: 98.2077% ( 1) 00:08:00.629 9.797 - 9.846: 98.2248% ( 3) 00:08:00.629 9.846 - 9.895: 98.2418% ( 3) 00:08:00.629 9.895 - 9.945: 98.2475% ( 1) 00:08:00.629 9.994 - 10.043: 98.2532% ( 1) 00:08:00.629 10.043 - 10.092: 98.2589% ( 1) 00:08:00.629 10.142 - 10.191: 98.2646% ( 1) 00:08:00.629 10.191 - 10.240: 98.2760% ( 2) 00:08:00.629 10.289 - 10.338: 98.2987% ( 4) 00:08:00.629 10.338 - 10.388: 98.3101% ( 2) 00:08:00.629 10.388 - 10.437: 98.3158% ( 1) 00:08:00.629 10.535 - 10.585: 98.3215% ( 1) 00:08:00.629 10.634 - 10.683: 98.3272% ( 1) 00:08:00.629 10.683 - 10.732: 98.3385% ( 2) 00:08:00.629 10.732 - 10.782: 98.3556% ( 3) 00:08:00.629 10.782 - 10.831: 98.3613% ( 1) 00:08:00.629 10.831 - 10.880: 98.3670% ( 1) 00:08:00.629 10.929 - 10.978: 98.3727% ( 1) 00:08:00.629 10.978 - 11.028: 98.3784% ( 1) 00:08:00.629 11.077 - 11.126: 98.3898% ( 2) 00:08:00.629 11.126 - 11.175: 98.3954% ( 1) 00:08:00.629 11.175 - 11.225: 98.4011% ( 1) 00:08:00.629 11.225 - 11.274: 98.4068% ( 1) 00:08:00.629 11.274 - 11.323: 98.4125% ( 1) 00:08:00.629 11.372 - 11.422: 98.4182% ( 1) 00:08:00.629 11.471 - 11.520: 98.4239% ( 1) 00:08:00.629 11.618 - 11.668: 98.4296% ( 1) 00:08:00.629 11.766 - 11.815: 98.4353% ( 1) 00:08:00.629 12.062 - 12.111: 98.4410% ( 1) 00:08:00.629 12.111 - 12.160: 98.4467% ( 1) 00:08:00.629 12.258 - 12.308: 98.4523% ( 1) 00:08:00.629 12.603 - 12.702: 98.4580% ( 1) 00:08:00.629 12.702 - 12.800: 98.4694% ( 2) 00:08:00.629 12.800 - 12.898: 98.5036% ( 6) 00:08:00.629 12.898 - 12.997: 98.5548% ( 9) 00:08:00.629 12.997 - 13.095: 
98.6230% ( 12) 00:08:00.629 13.095 - 13.194: 98.6799% ( 10) 00:08:00.629 13.194 - 13.292: 98.7255% ( 8) 00:08:00.629 13.292 - 13.391: 98.7824% ( 10) 00:08:00.629 13.391 - 13.489: 98.8677% ( 15) 00:08:00.629 13.489 - 13.588: 99.0043% ( 24) 00:08:00.629 13.588 - 13.686: 99.1238% ( 21) 00:08:00.629 13.686 - 13.785: 99.2148% ( 16) 00:08:00.629 13.785 - 13.883: 99.3001% ( 15) 00:08:00.629 13.883 - 13.982: 99.3798% ( 14) 00:08:00.629 13.982 - 14.080: 99.4424% ( 11) 00:08:00.629 14.080 - 14.178: 99.4765% ( 6) 00:08:00.629 14.178 - 14.277: 99.5220% ( 8) 00:08:00.629 14.277 - 14.375: 99.5960% ( 13) 00:08:00.629 14.375 - 14.474: 99.6188% ( 4) 00:08:00.629 14.474 - 14.572: 99.6529% ( 6) 00:08:00.629 14.572 - 14.671: 99.6871% ( 6) 00:08:00.629 14.671 - 14.769: 99.7098% ( 4) 00:08:00.629 14.769 - 14.868: 99.7383% ( 5) 00:08:00.629 14.868 - 14.966: 99.7440% ( 1) 00:08:00.629 14.966 - 15.065: 99.7667% ( 4) 00:08:00.629 15.065 - 15.163: 99.7781% ( 2) 00:08:00.630 15.360 - 15.458: 99.7895% ( 2) 00:08:00.630 15.458 - 15.557: 99.7952% ( 1) 00:08:00.630 15.655 - 15.754: 99.8009% ( 1) 00:08:00.630 16.246 - 16.345: 99.8065% ( 1) 00:08:00.630 16.443 - 16.542: 99.8122% ( 1) 00:08:00.630 16.542 - 16.640: 99.8179% ( 1) 00:08:00.630 16.837 - 16.935: 99.8236% ( 1) 00:08:00.630 17.231 - 17.329: 99.8407% ( 3) 00:08:00.630 17.329 - 17.428: 99.8521% ( 2) 00:08:00.630 17.428 - 17.526: 99.8634% ( 2) 00:08:00.630 17.822 - 17.920: 99.8691% ( 1) 00:08:00.630 18.117 - 18.215: 99.8748% ( 1) 00:08:00.630 18.314 - 18.412: 99.8805% ( 1) 00:08:00.630 18.708 - 18.806: 99.8862% ( 1) 00:08:00.630 18.806 - 18.905: 99.8919% ( 1) 00:08:00.630 19.200 - 19.298: 99.8976% ( 1) 00:08:00.630 19.298 - 19.397: 99.9033% ( 1) 00:08:00.630 19.397 - 19.495: 99.9090% ( 1) 00:08:00.630 19.988 - 20.086: 99.9147% ( 1) 00:08:00.630 20.775 - 20.874: 99.9203% ( 1) 00:08:00.630 22.745 - 22.843: 99.9317% ( 2) 00:08:00.630 23.237 - 23.335: 99.9374% ( 1) 00:08:00.630 23.434 - 23.532: 99.9431% ( 1) 00:08:00.630 23.828 - 23.926: 99.9488% 
( 1) 00:08:00.630 24.123 - 24.222: 99.9545% ( 1) 00:08:00.630 28.948 - 29.145: 99.9602% ( 1) 00:08:00.630 47.065 - 47.262: 99.9659% ( 1) 00:08:00.630 50.215 - 50.412: 99.9716% ( 1) 00:08:00.630 51.594 - 51.988: 99.9772% ( 1) 00:08:00.630 53.563 - 53.957: 99.9829% ( 1) 00:08:00.630 57.108 - 57.502: 99.9886% ( 1) 00:08:00.630 239.458 - 241.034: 99.9943% ( 1) 00:08:00.630 280.418 - 281.994: 100.0000% ( 1) 00:08:00.630 00:08:00.630 ************************************ 00:08:00.630 END TEST nvme_overhead 00:08:00.630 ************************************ 00:08:00.630 00:08:00.630 real 0m1.209s 00:08:00.630 user 0m1.070s 00:08:00.630 sys 0m0.088s 00:08:00.630 19:51:44 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.630 19:51:44 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:00.630 19:51:44 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:00.630 19:51:44 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:00.630 19:51:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.630 19:51:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.630 ************************************ 00:08:00.630 START TEST nvme_arbitration 00:08:00.630 ************************************ 00:08:00.630 19:51:44 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:03.912 Initializing NVMe Controllers 00:08:03.912 Attached to 0000:00:10.0 00:08:03.912 Attached to 0000:00:11.0 00:08:03.912 Attached to 0000:00:13.0 00:08:03.912 Attached to 0000:00:12.0 00:08:03.912 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:03.912 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:03.912 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:03.912 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:03.912 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 
00:08:03.912 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:03.912 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:03.912 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:03.912 Initialization complete. Launching workers. 00:08:03.912 Starting thread on core 1 with urgent priority queue 00:08:03.912 Starting thread on core 2 with urgent priority queue 00:08:03.912 Starting thread on core 3 with urgent priority queue 00:08:03.912 Starting thread on core 0 with urgent priority queue 00:08:03.912 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:08:03.912 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:08:03.912 QEMU NVMe Ctrl (12341 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:08:03.912 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:08:03.912 QEMU NVMe Ctrl (12343 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:08:03.912 QEMU NVMe Ctrl (12342 ) core 3: 874.67 IO/s 114.33 secs/100000 ios 00:08:03.912 ======================================================== 00:08:03.912 00:08:03.912 ************************************ 00:08:03.912 END TEST nvme_arbitration 00:08:03.912 ************************************ 00:08:03.912 00:08:03.912 real 0m3.268s 00:08:03.912 user 0m9.166s 00:08:03.912 sys 0m0.109s 00:08:03.912 19:51:47 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.913 19:51:47 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:03.913 19:51:47 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:03.913 19:51:47 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:03.913 19:51:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.913 19:51:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.913 
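Editorial note on the arbitration summary above: the "secs/100000 ios" column is simply 100000 divided by the reported IO/s figure. A quick awk check, using values copied from the table above (not new measurements):

```shell
# Verify the arbitration summary arithmetic: secs per 100000 ios = 100000 / IOPS.
# The 960.00 IO/s value is QEMU NVMe Ctrl (12340) on core 0 from the table above.
awk 'BEGIN {
  iops = 960.00
  printf "%.2f secs/100000 ios\n", 100000 / iops   # matches 104.17 in the log
}'
```

The same arithmetic reproduces the other rows (e.g. 938.67 IO/s gives 106.53 secs/100000 ios), which confirms the columns are derived, not independently measured.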
************************************ 00:08:03.913 START TEST nvme_single_aen 00:08:03.913 ************************************ 00:08:03.913 19:51:47 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:03.913 Asynchronous Event Request test 00:08:03.913 Attached to 0000:00:10.0 00:08:03.913 Attached to 0000:00:11.0 00:08:03.913 Attached to 0000:00:13.0 00:08:03.913 Attached to 0000:00:12.0 00:08:03.913 Reset controller to setup AER completions for this process 00:08:03.913 Registering asynchronous event callbacks... 00:08:03.913 Getting orig temperature thresholds of all controllers 00:08:03.913 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.913 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.913 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.913 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:03.913 Setting all controllers temperature threshold low to trigger AER 00:08:03.913 Waiting for all controllers temperature threshold to be set lower 00:08:03.913 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.913 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:03.913 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.913 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:03.913 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.913 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:03.913 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:03.913 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:03.913 Waiting for all controllers to trigger AER and reset threshold 00:08:03.913 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.913 0000:00:11.0: Current 
Temperature: 323 Kelvin (50 Celsius) 00:08:03.913 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.913 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.913 Cleaning up... 00:08:03.913 00:08:03.913 real 0m0.199s 00:08:03.913 user 0m0.073s 00:08:03.913 sys 0m0.095s 00:08:03.913 ************************************ 00:08:03.913 END TEST nvme_single_aen 00:08:03.913 ************************************ 00:08:03.913 19:51:48 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.913 19:51:48 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:03.913 19:51:48 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:03.913 19:51:48 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.913 19:51:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.913 19:51:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.913 ************************************ 00:08:03.913 START TEST nvme_doorbell_aers 00:08:03.913 ************************************ 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:03.913 19:51:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:04.171 [2024-09-30 19:51:48.466428] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:14.165 Executing: test_write_invalid_db 00:08:14.165 Waiting for AER completion... 00:08:14.165 Failure: test_write_invalid_db 00:08:14.165 00:08:14.165 Executing: test_invalid_db_write_overflow_sq 00:08:14.165 Waiting for AER completion... 00:08:14.165 Failure: test_invalid_db_write_overflow_sq 00:08:14.165 00:08:14.165 Executing: test_invalid_db_write_overflow_cq 00:08:14.165 Waiting for AER completion... 00:08:14.165 Failure: test_invalid_db_write_overflow_cq 00:08:14.165 00:08:14.165 19:51:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:14.165 19:51:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:14.165 [2024-09-30 19:51:58.485395] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:24.168 Executing: test_write_invalid_db 00:08:24.168 Waiting for AER completion... 
00:08:24.168 Failure: test_write_invalid_db 00:08:24.168 00:08:24.168 Executing: test_invalid_db_write_overflow_sq 00:08:24.168 Waiting for AER completion... 00:08:24.168 Failure: test_invalid_db_write_overflow_sq 00:08:24.168 00:08:24.168 Executing: test_invalid_db_write_overflow_cq 00:08:24.168 Waiting for AER completion... 00:08:24.168 Failure: test_invalid_db_write_overflow_cq 00:08:24.168 00:08:24.168 19:52:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:24.168 19:52:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:24.168 [2024-09-30 19:52:08.532277] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:34.160 Executing: test_write_invalid_db 00:08:34.160 Waiting for AER completion... 00:08:34.160 Failure: test_write_invalid_db 00:08:34.160 00:08:34.160 Executing: test_invalid_db_write_overflow_sq 00:08:34.160 Waiting for AER completion... 00:08:34.160 Failure: test_invalid_db_write_overflow_sq 00:08:34.160 00:08:34.160 Executing: test_invalid_db_write_overflow_cq 00:08:34.160 Waiting for AER completion... 00:08:34.160 Failure: test_invalid_db_write_overflow_cq 00:08:34.160 00:08:34.160 19:52:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:34.160 19:52:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:34.418 [2024-09-30 19:52:18.587846] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 Executing: test_write_invalid_db 00:08:44.447 Waiting for AER completion... 
00:08:44.447 Failure: test_write_invalid_db 00:08:44.447 00:08:44.447 Executing: test_invalid_db_write_overflow_sq 00:08:44.447 Waiting for AER completion... 00:08:44.447 Failure: test_invalid_db_write_overflow_sq 00:08:44.447 00:08:44.447 Executing: test_invalid_db_write_overflow_cq 00:08:44.447 Waiting for AER completion... 00:08:44.447 Failure: test_invalid_db_write_overflow_cq 00:08:44.447 00:08:44.447 00:08:44.447 real 0m40.197s 00:08:44.447 user 0m34.196s 00:08:44.447 sys 0m5.630s 00:08:44.447 19:52:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.447 19:52:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:44.447 ************************************ 00:08:44.447 END TEST nvme_doorbell_aers 00:08:44.447 ************************************ 00:08:44.447 19:52:28 nvme -- nvme/nvme.sh@97 -- # uname 00:08:44.447 19:52:28 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:44.447 19:52:28 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:44.447 19:52:28 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:44.447 19:52:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.447 19:52:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.447 ************************************ 00:08:44.447 START TEST nvme_multi_aen 00:08:44.447 ************************************ 00:08:44.447 19:52:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:44.447 [2024-09-30 19:52:28.606978] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.607037] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 
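Editorial note on the nvme_doorbell_aers run that completed above: the traced shell shows the per-controller pattern — enumerate BDFs via gen_nvme.sh + jq, then bound each doorbell_aers invocation with a 10-second timeout. A minimal stand-in sketch (the BDF list is taken from the log; `echo` stands in for the real SPDK binary so the sketch runs anywhere):

```shell
#!/usr/bin/env bash
# Sketch of the loop traced in the log: one time-bounded doorbell_aers run per BDF.
# 'echo' is a hypothetical stand-in for the real doorbell_aers binary.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
  # 'timeout --preserve-status 10' kills a hung run after 10 s but reports
  # the child's own exit status rather than timeout's usual 124.
  timeout --preserve-status 10 \
    echo "doorbell_aers -r trtype:PCIe traddr:${bdf}"
done
```

The `--preserve-status` flag matters here because the harness inspects the test binary's exit code; without it, a run that simply takes the full 10 seconds would be indistinguishable from a failure.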
00:08:44.447 [2024-09-30 19:52:28.607047] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.608442] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.608477] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.608485] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.609358] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.609382] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.609390] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.610238] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.610262] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 00:08:44.447 [2024-09-30 19:52:28.610281] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63560) is not found. Dropping the request. 
00:08:44.447 Child process pid: 64086 00:08:44.447 [Child] Asynchronous Event Request test 00:08:44.447 [Child] Attached to 0000:00:10.0 00:08:44.447 [Child] Attached to 0000:00:11.0 00:08:44.447 [Child] Attached to 0000:00:13.0 00:08:44.447 [Child] Attached to 0000:00:12.0 00:08:44.447 [Child] Registering asynchronous event callbacks... 00:08:44.447 [Child] Getting orig temperature thresholds of all controllers 00:08:44.447 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.447 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.447 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.447 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.447 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:44.447 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.447 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.447 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.447 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.447 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.447 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.447 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.447 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.447 [Child] Cleaning up... 00:08:44.709 Asynchronous Event Request test 00:08:44.709 Attached to 0000:00:10.0 00:08:44.709 Attached to 0000:00:11.0 00:08:44.709 Attached to 0000:00:13.0 00:08:44.709 Attached to 0000:00:12.0 00:08:44.709 Reset controller to setup AER completions for this process 00:08:44.709 Registering asynchronous event callbacks... 
00:08:44.709 Getting orig temperature thresholds of all controllers 00:08:44.709 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.709 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.709 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.709 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:44.709 Setting all controllers temperature threshold low to trigger AER 00:08:44.709 Waiting for all controllers temperature threshold to be set lower 00:08:44.709 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.709 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:44.709 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.709 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:44.709 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.709 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:44.709 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:44.709 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:44.709 Waiting for all controllers to trigger AER and reset threshold 00:08:44.709 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.709 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.709 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.709 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:44.709 Cleaning up... 
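Editorial note on the temperature readings above: the AER tests report both scales, and the Celsius value is derived from the Kelvin value by the integer offset 273, as the log's own pairs show (343 Kelvin as 70 Celsius, 323 Kelvin as 50 Celsius). A one-liner check:

```shell
# Reproduce the Kelvin -> Celsius pairs printed by the AER tests above,
# using the integer offset 273 implied by the log output.
awk 'BEGIN {
  printf "threshold: %d Kelvin (%d Celsius)\n", 343, 343 - 273
  printf "current:   %d Kelvin (%d Celsius)\n", 323, 323 - 273
}'
```

This is why "Setting all controllers temperature threshold low to trigger AER" works: dropping the 343 K threshold below the 323 K current temperature forces each controller to fire the asynchronous event the test is waiting on.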
00:08:44.709 00:08:44.709 real 0m0.407s 00:08:44.709 user 0m0.132s 00:08:44.709 sys 0m0.171s 00:08:44.709 19:52:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.709 19:52:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:44.709 ************************************ 00:08:44.709 END TEST nvme_multi_aen 00:08:44.709 ************************************ 00:08:44.709 19:52:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:44.709 19:52:28 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:44.709 19:52:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.709 19:52:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.709 ************************************ 00:08:44.709 START TEST nvme_startup 00:08:44.709 ************************************ 00:08:44.709 19:52:28 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:44.969 Initializing NVMe Controllers 00:08:44.969 Attached to 0000:00:10.0 00:08:44.969 Attached to 0000:00:11.0 00:08:44.969 Attached to 0000:00:13.0 00:08:44.969 Attached to 0000:00:12.0 00:08:44.969 Initialization complete. 00:08:44.969 Time used:131546.828 (us). 
00:08:44.969 00:08:44.969 real 0m0.205s 00:08:44.969 user 0m0.049s 00:08:44.969 sys 0m0.108s 00:08:44.969 19:52:29 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.969 ************************************ 00:08:44.969 END TEST nvme_startup 00:08:44.969 ************************************ 00:08:44.969 19:52:29 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:44.969 19:52:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:44.969 19:52:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.969 19:52:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.969 19:52:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.969 ************************************ 00:08:44.969 START TEST nvme_multi_secondary 00:08:44.969 ************************************ 00:08:44.969 19:52:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:44.969 19:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64136 00:08:44.969 19:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64137 00:08:44.969 19:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:44.969 19:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:44.969 19:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:48.263 Initializing NVMe Controllers 00:08:48.263 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.263 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.263 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.263 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.263 
Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:48.263 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:48.263 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:48.263 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:48.263 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:48.263 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:48.263 Initialization complete. Launching workers. 00:08:48.263 ======================================================== 00:08:48.263 Latency(us) 00:08:48.263 Device Information : IOPS MiB/s Average min max 00:08:48.263 PCIE (0000:00:10.0) NSID 1 from core 1: 7369.69 28.79 2169.67 727.01 7154.40 00:08:48.263 PCIE (0000:00:11.0) NSID 1 from core 1: 7369.69 28.79 2171.78 743.52 7449.14 00:08:48.263 PCIE (0000:00:13.0) NSID 1 from core 1: 7369.69 28.79 2172.21 739.18 7601.72 00:08:48.263 PCIE (0000:00:12.0) NSID 1 from core 1: 7369.69 28.79 2172.25 742.31 8089.35 00:08:48.263 PCIE (0000:00:12.0) NSID 2 from core 1: 7369.69 28.79 2172.26 737.69 7037.95 00:08:48.263 PCIE (0000:00:12.0) NSID 3 from core 1: 7369.69 28.79 2173.22 732.75 6625.51 00:08:48.263 ======================================================== 00:08:48.263 Total : 44218.13 172.73 2171.90 727.01 8089.35 00:08:48.263 00:08:48.263 Initializing NVMe Controllers 00:08:48.263 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:48.263 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:48.263 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.263 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:48.263 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:48.263 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:48.263 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:48.263 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:48.263 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:48.263 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:48.263 
Initialization complete. Launching workers. 00:08:48.263 ======================================================== 00:08:48.263 Latency(us) 00:08:48.263 Device Information : IOPS MiB/s Average min max 00:08:48.263 PCIE (0000:00:10.0) NSID 1 from core 2: 2971.14 11.61 5383.92 934.18 15773.20 00:08:48.263 PCIE (0000:00:11.0) NSID 1 from core 2: 2971.14 11.61 5384.89 913.23 15164.24 00:08:48.263 PCIE (0000:00:13.0) NSID 1 from core 2: 2971.14 11.61 5384.84 945.14 13233.24 00:08:48.263 PCIE (0000:00:12.0) NSID 1 from core 2: 2971.14 11.61 5384.79 1025.09 17255.41 00:08:48.263 PCIE (0000:00:12.0) NSID 2 from core 2: 2971.14 11.61 5384.88 903.54 14392.49 00:08:48.263 PCIE (0000:00:12.0) NSID 3 from core 2: 2971.14 11.61 5384.87 914.80 14776.38 00:08:48.263 ======================================================== 00:08:48.263 Total : 17826.83 69.64 5384.70 903.54 17255.41 00:08:48.263 00:08:48.263 19:52:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64136 00:08:50.287 Initializing NVMe Controllers 00:08:50.287 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.287 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.287 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.287 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.287 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:50.287 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:50.287 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:50.287 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:50.287 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:50.287 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:50.287 Initialization complete. Launching workers. 
00:08:50.287 ========================================================
00:08:50.287 Latency(us)
00:08:50.287 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:50.287 PCIE (0000:00:10.0) NSID 1 from core 0:   10779.08      42.11    1483.11     684.58    9058.67
00:08:50.287 PCIE (0000:00:11.0) NSID 1 from core 0:   10779.08      42.11    1483.96     709.59    8342.25
00:08:50.287 PCIE (0000:00:13.0) NSID 1 from core 0:   10779.08      42.11    1483.93     641.33    7648.40
00:08:50.287 PCIE (0000:00:12.0) NSID 1 from core 0:   10779.08      42.11    1483.91     616.44    8621.30
00:08:50.287 PCIE (0000:00:12.0) NSID 2 from core 0:   10779.08      42.11    1483.90     610.23    8719.37
00:08:50.287 PCIE (0000:00:12.0) NSID 3 from core 0:   10779.08      42.11    1483.88     588.90   10130.38
00:08:50.287 ========================================================
00:08:50.287 Total                                  :   64674.48     252.63    1483.78     588.90   10130.38
00:08:50.287
00:08:50.287 19:52:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64137
00:08:50.287 19:52:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64212
00:08:50.287 19:52:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:50.287 19:52:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64213
00:08:50.288 19:52:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:08:50.288 19:52:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:53.572 Initializing NVMe Controllers
00:08:53.572 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:53.572 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:53.572 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:53.572 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:53.572 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:53.572 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:53.572 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:53.572 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:53.572 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:53.572 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:53.572 Initialization complete. Launching workers.
00:08:53.572 ========================================================
00:08:53.572 Latency(us)
00:08:53.572 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:53.572 PCIE (0000:00:10.0) NSID 1 from core 1:    7684.66      30.02    2080.71     714.36    7586.06
00:08:53.572 PCIE (0000:00:11.0) NSID 1 from core 1:    7684.66      30.02    2081.80     727.82    6845.63
00:08:53.572 PCIE (0000:00:13.0) NSID 1 from core 1:    7684.66      30.02    2081.81     728.22    6557.87
00:08:53.572 PCIE (0000:00:12.0) NSID 1 from core 1:    7684.66      30.02    2082.95     738.24    7045.77
00:08:53.572 PCIE (0000:00:12.0) NSID 2 from core 1:    7684.66      30.02    2084.30     727.56    7825.90
00:08:53.572 PCIE (0000:00:12.0) NSID 3 from core 1:    7684.66      30.02    2084.42     728.42    8501.95
00:08:53.572 ========================================================
00:08:53.572 Total                                  :   46107.96     180.11    2082.67     714.36    8501.95
00:08:53.572
00:08:53.572 Initializing NVMe Controllers
00:08:53.572 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:53.572 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:53.572 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:53.572 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:53.572 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:53.572 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:53.572 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:53.572 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:53.572 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:53.572 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:53.572 Initialization complete. Launching workers.
00:08:53.572 ========================================================
00:08:53.572 Latency(us)
00:08:53.572 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:53.572 PCIE (0000:00:10.0) NSID 1 from core 0:    7448.59      29.10    2146.64     713.77    8759.09
00:08:53.572 PCIE (0000:00:11.0) NSID 1 from core 0:    7448.59      29.10    2148.01     732.00    7071.70
00:08:53.572 PCIE (0000:00:13.0) NSID 1 from core 0:    7448.59      29.10    2147.97     744.98    6968.59
00:08:53.572 PCIE (0000:00:12.0) NSID 1 from core 0:    7448.59      29.10    2147.94     729.54    7090.77
00:08:53.572 PCIE (0000:00:12.0) NSID 2 from core 0:    7448.59      29.10    2148.24     730.09    7691.75
00:08:53.572 PCIE (0000:00:12.0) NSID 3 from core 0:    7448.59      29.10    2148.21     735.47    7875.97
00:08:53.572 ========================================================
00:08:53.572 Total                                  :   44691.52     174.58    2147.84     713.77    8759.09
00:08:53.572
00:08:55.470 Initializing NVMe Controllers
00:08:55.470 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:55.470 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:55.470 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:55.470 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:55.470 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:55.470 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:55.470 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:55.470 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:55.470 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:55.470 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:55.470 Initialization complete. Launching workers.
00:08:55.470 ========================================================
00:08:55.470 Latency(us)
00:08:55.470 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:55.470 PCIE (0000:00:10.0) NSID 1 from core 2:    3102.27      12.12    5155.62     767.18   18375.71
00:08:55.470 PCIE (0000:00:11.0) NSID 1 from core 2:    3102.27      12.12    5157.67     725.21   16438.87
00:08:55.470 PCIE (0000:00:13.0) NSID 1 from core 2:    3102.27      12.12    5157.38     750.70   15732.55
00:08:55.470 PCIE (0000:00:12.0) NSID 1 from core 2:    3102.27      12.12    5157.61     744.37   15063.38
00:08:55.470 PCIE (0000:00:12.0) NSID 2 from core 2:    3102.27      12.12    5157.32     753.04   18747.97
00:08:55.470 PCIE (0000:00:12.0) NSID 3 from core 2:    3102.27      12.12    5157.54     771.38   17860.59
00:08:55.470 ========================================================
00:08:55.470 Total                                  :   18613.61      72.71    5157.19     725.21   18747.97
00:08:55.470
00:08:55.470 19:52:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64212
00:08:55.470 19:52:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64213
00:08:55.470
00:08:55.470 real	0m10.675s
00:08:55.470 user	0m18.389s
00:08:55.470 sys	0m0.644s
00:08:55.470 19:52:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:55.470 19:52:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:08:55.470 ************************************
00:08:55.470 END TEST nvme_multi_secondary
00:08:55.470 ************************************
00:08:55.729 19:52:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:55.729 19:52:39 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:55.729 19:52:39 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63169 ]]
00:08:55.729 19:52:39 nvme -- common/autotest_common.sh@1090 -- # kill 63169
00:08:55.729 19:52:39 nvme -- common/autotest_common.sh@1091 -- # wait 63169
00:08:55.729 [2024-09-30 19:52:39.840747] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.840845] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.840882] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.840907] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.844778] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.844890] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.844931] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.844961] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.848919] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.849002] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.849028] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
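The nvme_multi_secondary run that ends above launches several spdk_nvme_perf instances pinned to different core masks (-c 0x1, 0x2, 0x4) in the background, records their PIDs (pid0=64212, pid1=64213), and then blocks on `wait` for each. Stripped of the SPDK specifics, the control flow is the standard bash background-job pattern sketched below; `worker` is a hypothetical stand-in for the spdk_nvme_perf binary.

```shell
#!/usr/bin/env bash
# Background-job pattern used by the multi-secondary test: start one worker
# per core mask, record each PID, then block until all of them have exited.
worker() {
    # stand-in for: spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c "$1"
    sleep 0.1
    echo "worker with core mask $1 done"
}

worker 0x1 & pid0=$!
worker 0x2 & pid1=$!
worker 0x4 & pid2=$!

# wait returns non-zero if any of the named jobs exited with failure
wait "$pid0" "$pid1" "$pid2"
echo "all workers finished"
```

Waiting on explicit PIDs (rather than a bare `wait`) is what lets the test surface a failing worker's exit status instead of silently swallowing it.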
00:08:55.729 [2024-09-30 19:52:39.849055] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.852929] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.853015] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.853043] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 [2024-09-30 19:52:39.853070] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:55.729 19:52:40 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0
00:08:55.729 19:52:40 nvme -- common/autotest_common.sh@1097 -- # echo 2
00:08:55.729 19:52:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:55.729 19:52:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:55.729 19:52:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:55.729 19:52:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:55.729 ************************************
00:08:55.729 START TEST bdev_nvme_reset_stuck_adm_cmd
00:08:55.729 ************************************
00:08:55.729 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:55.987 * Looking for test storage...
00:08:55.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.987 --rc genhtml_branch_coverage=1
00:08:55.987 --rc genhtml_function_coverage=1
00:08:55.987 --rc genhtml_legend=1
00:08:55.987 --rc geninfo_all_blocks=1
00:08:55.987 --rc geninfo_unexecuted_blocks=1
00:08:55.987
00:08:55.987 '
00:08:55.987 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.987 --rc genhtml_branch_coverage=1
00:08:55.988 --rc genhtml_function_coverage=1
00:08:55.988 --rc genhtml_legend=1
00:08:55.988 --rc geninfo_all_blocks=1
00:08:55.988 --rc geninfo_unexecuted_blocks=1
00:08:55.988
00:08:55.988 '
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:08:55.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.988 --rc genhtml_branch_coverage=1
00:08:55.988 --rc genhtml_function_coverage=1
00:08:55.988 --rc genhtml_legend=1
00:08:55.988 --rc geninfo_all_blocks=1
00:08:55.988 --rc geninfo_unexecuted_blocks=1
00:08:55.988
00:08:55.988 '
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:08:55.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.988 --rc genhtml_branch_coverage=1
00:08:55.988 --rc genhtml_function_coverage=1
00:08:55.988 --rc genhtml_legend=1
00:08:55.988 --rc geninfo_all_blocks=1
00:08:55.988 --rc geninfo_unexecuted_blocks=1
00:08:55.988
00:08:55.988 '
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=()
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=()
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64369
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
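The cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x by splitting both version strings on '.' and '-' and comparing them field by field. A simplified stand-alone re-implementation of that comparison (not the exact SPDK helper; it skips the decimal-validation step seen in the trace) looks like:

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly less than version $2.
# Versions are split on '.' and '-', and missing fields are treated as 0,
# so "1.15" vs "2" compares as (1,15) vs (2,0).
version_lt() {
    local IFS=.-
    local -a a b
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    local i
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field decides
    done
    return 1   # all fields equal: not strictly less
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov 1.15 seen in the trace takes the legacy `--rc lcov_*` option branch rather than the 2.x one.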
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64369
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64369 ']'
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:55.988 19:52:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:55.988 [2024-09-30 19:52:40.310580] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:08:55.988 [2024-09-30 19:52:40.310682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64369 ]
00:08:56.246 [2024-09-30 19:52:40.464864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:56.505 [2024-09-30 19:52:40.647708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:08:56.505 [2024-09-30 19:52:40.647978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:08:56.505 [2024-09-30 19:52:40.648190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.505 [2024-09-30 19:52:40.648205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:08:57.071 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
nvme0n1
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_iiu23.txt
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
true
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727725961
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64392
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:08:57.072 19:52:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:08:58.973 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:08:58.973 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.973 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:58.973 [2024-09-30 19:52:43.333279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:08:58.973 [2024-09-30 19:52:43.333551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:08:58.973 [2024-09-30 19:52:43.333575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:08:58.973 [2024-09-30 19:52:43.333588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:58.973 [2024-09-30 19:52:43.335651] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:58.973 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.232 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64392
19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64392
19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64392
19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_iiu23.txt
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_iiu23.txt
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64369
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64369 ']'
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64369
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64369
00:08:59.232 killing process with pid 64369
19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64369'
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64369
00:08:59.232 19:52:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64369
00:09:00.611 19:52:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
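The base64_decode_bits calls traced above unpack the NVMe completion the test captured: the base64 `cpl` blob decodes to a 16-byte completion queue entry, the 16-bit status word sits in bytes 14-15, and shifting/masking that word yields the Status Code (SC, bits 8:1) and Status Code Type (SCT, bits 11:9). A rough stand-alone equivalent is sketched below; the shift/mask argument semantics are inferred from the trace output (status=2 → sc=0x1, sct=0x0), not taken from the SPDK helper's source.

```shell
#!/usr/bin/env bash
# Decode a base64-encoded 16-byte NVMe completion and extract a bit field
# from the status word (little-endian, bytes 14-15 of the CQE).
# Inferred usage: SC = field(shift=1, mask=0xff), SCT = field(shift=9, mask=0x7).
decode_status_bits() {
    local b64=$1 shift_by=$2 mask=$3
    local -a bytes
    # one "0xNN" token per decoded byte, same trick as the traced helper
    mapfile -t bytes < <(printf '%s' "$b64" | base64 -d | hexdump -ve '/1 "0x%02x\n"')
    local status=$(( bytes[14] | (bytes[15] << 8) ))
    printf '0x%x\n' $(( (status >> shift_by) & mask ))
}

decode_status_bits 'AAAAAAAAAAAAAAAAAAACAA==' 1 0xff   # SC  -> 0x1
decode_status_bits 'AAAAAAAAAAAAAAAAAAACAA==' 9 0x7    # SCT -> 0x0
```

With this payload the status word is 0x0002, so SC decodes to 0x1 (the injected Invalid Opcode) and SCT to 0x0 (generic command status), which is exactly what the test compares against err_injection_sc/err_injection_sct.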
00:09:00.611 19:52:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:09:00.611 ************************************
00:09:00.611 END TEST bdev_nvme_reset_stuck_adm_cmd
00:09:00.611 ************************************
00:09:00.611
00:09:00.611 real	0m4.666s
00:09:00.611 user	0m16.156s
00:09:00.611 sys	0m0.513s
00:09:00.611 19:52:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:00.611 19:52:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:09:00.611 19:52:44 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:09:00.611 19:52:44 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:09:00.611 19:52:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:00.611 19:52:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:00.611 19:52:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:00.611 ************************************
00:09:00.611 START TEST nvme_fio
00:09:00.611 ************************************
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=()
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:09:00.611 19:52:44 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:09:00.611 19:52:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:09:00.884 19:52:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:09:00.884 19:52:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:09:01.153 19:52:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:09:01.153 19:52:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib=
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:09:01.153 19:52:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:09:01.153 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:09:01.153 fio-3.35
00:09:01.153 Starting 1 thread
00:09:07.712
00:09:07.712 test: (groupid=0, jobs=1): err= 0: pid=64526: Mon Sep 30 19:52:51 2024
00:09:07.712   read: IOPS=23.0k, BW=89.8MiB/s (94.2MB/s)(180MiB/2001msec)
00:09:07.712     slat (nsec): min=3375, max=77825, avg=4897.99, stdev=2179.03
00:09:07.712     clat (usec): min=245, max=7946, avg=2772.46, stdev=914.94
00:09:07.712      lat (usec): min=249, max=7959, avg=2777.36, stdev=915.99
00:09:07.712     clat percentiles (usec):
00:09:07.712      |  1.00th=[ 1401],  5.00th=[ 1975], 10.00th=[ 2180], 20.00th=[ 2343],
00:09:07.712      | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[
2540], 00:09:07.712 | 70.00th=[ 2671], 80.00th=[ 2966], 90.00th=[ 3982], 95.00th=[ 5014], 00:09:07.712 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7373], 99.95th=[ 7504], 00:09:07.712 | 99.99th=[ 7832] 00:09:07.712 bw ( KiB/s): min=88680, max=99136, per=100.00%, avg=95068.33, stdev=5600.99, samples=3 00:09:07.712 iops : min=22170, max=24784, avg=23767.00, stdev=1400.20, samples=3 00:09:07.712 write: IOPS=22.8k, BW=89.3MiB/s (93.6MB/s)(179MiB/2001msec); 0 zone resets 00:09:07.712 slat (nsec): min=3431, max=84100, avg=5077.53, stdev=2087.33 00:09:07.712 clat (usec): min=229, max=7871, avg=2790.63, stdev=925.21 00:09:07.712 lat (usec): min=234, max=7885, avg=2795.70, stdev=926.22 00:09:07.712 clat percentiles (usec): 00:09:07.712 | 1.00th=[ 1385], 5.00th=[ 1991], 10.00th=[ 2212], 20.00th=[ 2343], 00:09:07.712 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:09:07.712 | 70.00th=[ 2704], 80.00th=[ 2999], 90.00th=[ 4015], 95.00th=[ 5080], 00:09:07.712 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7373], 99.95th=[ 7504], 00:09:07.712 | 99.99th=[ 7767] 00:09:07.712 bw ( KiB/s): min=89136, max=98920, per=100.00%, avg=95041.67, stdev=5197.52, samples=3 00:09:07.712 iops : min=22284, max=24730, avg=23760.33, stdev=1299.33, samples=3 00:09:07.712 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.17% 00:09:07.712 lat (msec) : 2=5.04%, 4=84.81%, 10=9.96% 00:09:07.712 cpu : usr=99.05%, sys=0.20%, ctx=4, majf=0, minf=608 00:09:07.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:07.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.712 issued rwts: total=45996,45721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.712 00:09:07.712 Run status group 0 (all jobs): 00:09:07.712 READ: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s 
(94.2MB/s-94.2MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:07.712 WRITE: bw=89.3MiB/s (93.6MB/s), 89.3MiB/s-89.3MiB/s (93.6MB/s-93.6MB/s), io=179MiB (187MB), run=2001-2001msec 00:09:07.712 ----------------------------------------------------- 00:09:07.712 Suppressions used: 00:09:07.712 count bytes template 00:09:07.712 1 32 /usr/src/fio/parse.c 00:09:07.712 1 8 libtcmalloc_minimal.so 00:09:07.712 ----------------------------------------------------- 00:09:07.712 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.712 19:52:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:07.712 19:52:51 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.712 19:52:51 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:07.712 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.712 fio-3.35 00:09:07.712 Starting 1 thread 00:09:14.266 00:09:14.266 test: (groupid=0, jobs=1): err= 0: pid=64582: Mon Sep 30 19:52:58 2024 00:09:14.266 read: IOPS=24.5k, BW=95.7MiB/s (100MB/s)(192MiB/2001msec) 00:09:14.266 slat (nsec): min=3343, max=97655, avg=4898.23, stdev=2158.04 00:09:14.266 clat (usec): min=466, max=7662, avg=2605.93, stdev=809.89 00:09:14.266 lat (usec): min=476, max=7671, avg=2610.83, stdev=811.30 00:09:14.266 clat percentiles (usec): 00:09:14.266 | 1.00th=[ 1516], 5.00th=[ 2057], 
10.00th=[ 2212], 20.00th=[ 2343], 00:09:14.266 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:14.266 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2966], 95.00th=[ 4228], 00:09:14.266 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7439], 00:09:14.266 | 99.99th=[ 7570] 00:09:14.266 bw ( KiB/s): min=93616, max=98472, per=97.99%, avg=96032.00, stdev=2428.09, samples=3 00:09:14.266 iops : min=23404, max=24618, avg=24008.00, stdev=607.02, samples=3 00:09:14.266 write: IOPS=24.3k, BW=95.1MiB/s (99.7MB/s)(190MiB/2001msec); 0 zone resets 00:09:14.266 slat (nsec): min=3437, max=81355, avg=5152.44, stdev=2134.30 00:09:14.266 clat (usec): min=563, max=7673, avg=2615.36, stdev=816.84 00:09:14.266 lat (usec): min=573, max=7686, avg=2620.51, stdev=818.26 00:09:14.266 clat percentiles (usec): 00:09:14.266 | 1.00th=[ 1549], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:14.266 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:14.266 | 70.00th=[ 2474], 80.00th=[ 2540], 90.00th=[ 2999], 95.00th=[ 4359], 00:09:14.266 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7439], 00:09:14.266 | 99.99th=[ 7635] 00:09:14.266 bw ( KiB/s): min=93008, max=99584, per=98.74%, avg=96112.00, stdev=3303.41, samples=3 00:09:14.266 iops : min=23252, max=24896, avg=24028.00, stdev=825.85, samples=3 00:09:14.266 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.11% 00:09:14.266 lat (msec) : 2=3.42%, 4=90.51%, 10=5.95% 00:09:14.266 cpu : usr=99.05%, sys=0.25%, ctx=3, majf=0, minf=607 00:09:14.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:14.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.266 issued rwts: total=49025,48694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.266 00:09:14.266 Run status 
group 0 (all jobs): 00:09:14.266 READ: bw=95.7MiB/s (100MB/s), 95.7MiB/s-95.7MiB/s (100MB/s-100MB/s), io=192MiB (201MB), run=2001-2001msec 00:09:14.266 WRITE: bw=95.1MiB/s (99.7MB/s), 95.1MiB/s-95.1MiB/s (99.7MB/s-99.7MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:14.266 ----------------------------------------------------- 00:09:14.266 Suppressions used: 00:09:14.266 count bytes template 00:09:14.266 1 32 /usr/src/fio/parse.c 00:09:14.266 1 8 libtcmalloc_minimal.so 00:09:14.266 ----------------------------------------------------- 00:09:14.266 00:09:14.266 19:52:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:14.266 19:52:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:14.266 19:52:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:14.266 19:52:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:14.524 19:52:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:14.524 19:52:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:14.793 19:52:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:14.793 19:52:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:14.793 19:52:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:15.050 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:15.050 fio-3.35 00:09:15.051 Starting 1 thread 00:09:21.619 00:09:21.619 test: (groupid=0, jobs=1): err= 0: pid=64643: Mon Sep 30 19:53:05 2024 00:09:21.619 read: IOPS=20.3k, BW=79.1MiB/s (83.0MB/s)(158MiB/2001msec) 00:09:21.619 slat (usec): min=3, max=110, avg= 5.23, stdev= 2.64 00:09:21.619 clat (usec): min=235, max=8865, avg=3134.18, stdev=1155.97 00:09:21.619 lat (usec): min=239, max=8902, avg=3139.41, stdev=1157.23 00:09:21.619 clat percentiles (usec): 
00:09:21.619 | 1.00th=[ 1729], 5.00th=[ 2073], 10.00th=[ 2147], 20.00th=[ 2343], 00:09:21.619 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2704], 60.00th=[ 2900], 00:09:21.619 | 70.00th=[ 3163], 80.00th=[ 3884], 90.00th=[ 5014], 95.00th=[ 5669], 00:09:21.619 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 8029], 00:09:21.619 | 99.99th=[ 8356] 00:09:21.619 bw ( KiB/s): min=77784, max=83456, per=100.00%, avg=81432.00, stdev=3165.58, samples=3 00:09:21.619 iops : min=19446, max=20864, avg=20358.00, stdev=791.40, samples=3 00:09:21.619 write: IOPS=20.2k, BW=79.0MiB/s (82.8MB/s)(158MiB/2001msec); 0 zone resets 00:09:21.619 slat (usec): min=3, max=363, avg= 5.39, stdev= 3.61 00:09:21.619 clat (usec): min=204, max=8399, avg=3164.25, stdev=1166.69 00:09:21.619 lat (usec): min=209, max=8404, avg=3169.65, stdev=1167.95 00:09:21.619 clat percentiles (usec): 00:09:21.619 | 1.00th=[ 1729], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:21.619 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2737], 60.00th=[ 2933], 00:09:21.619 | 70.00th=[ 3195], 80.00th=[ 3949], 90.00th=[ 5080], 95.00th=[ 5735], 00:09:21.619 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 7767], 99.95th=[ 7963], 00:09:21.619 | 99.99th=[ 8160] 00:09:21.619 bw ( KiB/s): min=78120, max=83432, per=100.00%, avg=81565.33, stdev=2987.22, samples=3 00:09:21.619 iops : min=19530, max=20858, avg=20391.33, stdev=746.80, samples=3 00:09:21.619 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.06% 00:09:21.619 lat (msec) : 2=2.52%, 4=78.22%, 10=19.17% 00:09:21.619 cpu : usr=98.50%, sys=0.30%, ctx=13, majf=0, minf=607 00:09:21.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:21.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.619 issued rwts: total=40543,40458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.619 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:21.619 00:09:21.619 Run status group 0 (all jobs): 00:09:21.619 READ: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=158MiB (166MB), run=2001-2001msec 00:09:21.619 WRITE: bw=79.0MiB/s (82.8MB/s), 79.0MiB/s-79.0MiB/s (82.8MB/s-82.8MB/s), io=158MiB (166MB), run=2001-2001msec 00:09:21.619 ----------------------------------------------------- 00:09:21.619 Suppressions used: 00:09:21.619 count bytes template 00:09:21.619 1 32 /usr/src/fio/parse.c 00:09:21.619 1 8 libtcmalloc_minimal.so 00:09:21.619 ----------------------------------------------------- 00:09:21.619 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:21.619 19:53:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:21.619 19:53:05 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:21.619 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:21.619 fio-3.35 00:09:21.619 Starting 1 thread 00:09:29.778 00:09:29.778 test: (groupid=0, jobs=1): err= 0: pid=64705: Mon Sep 30 19:53:12 2024 00:09:29.778 read: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(131MiB/2001msec) 00:09:29.778 slat (usec): min=4, max=820, avg= 5.92, stdev= 5.64 00:09:29.778 clat (usec): min=689, max=10827, avg=3782.90, stdev=1448.98 00:09:29.778 lat (usec): min=695, 
max=10845, avg=3788.82, stdev=1450.31 00:09:29.778 clat percentiles (usec): 00:09:29.778 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2606], 00:09:29.778 | 30.00th=[ 2737], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3687], 00:09:29.778 | 70.00th=[ 4359], 80.00th=[ 5145], 90.00th=[ 5997], 95.00th=[ 6652], 00:09:29.778 | 99.00th=[ 7767], 99.50th=[ 8160], 99.90th=[ 9372], 99.95th=[ 9503], 00:09:29.778 | 99.99th=[10290] 00:09:29.778 bw ( KiB/s): min=59432, max=75040, per=98.71%, avg=66232.00, stdev=7995.40, samples=3 00:09:29.778 iops : min=14858, max=18760, avg=16558.00, stdev=1998.85, samples=3 00:09:29.778 write: IOPS=16.8k, BW=65.7MiB/s (68.8MB/s)(131MiB/2001msec); 0 zone resets 00:09:29.778 slat (usec): min=4, max=389, avg= 5.96, stdev= 4.12 00:09:29.778 clat (usec): min=722, max=10384, avg=3812.79, stdev=1452.88 00:09:29.778 lat (usec): min=729, max=10402, avg=3818.75, stdev=1454.13 00:09:29.778 clat percentiles (usec): 00:09:29.778 | 1.00th=[ 2057], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2606], 00:09:29.778 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3752], 00:09:29.778 | 70.00th=[ 4424], 80.00th=[ 5145], 90.00th=[ 5997], 95.00th=[ 6718], 00:09:29.778 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9372], 99.95th=[ 9634], 00:09:29.778 | 99.99th=[10290] 00:09:29.778 bw ( KiB/s): min=59272, max=74928, per=98.41%, avg=66160.00, stdev=7995.52, samples=3 00:09:29.778 iops : min=14818, max=18732, avg=16540.00, stdev=1998.88, samples=3 00:09:29.778 lat (usec) : 750=0.01%, 1000=0.02% 00:09:29.778 lat (msec) : 2=0.69%, 4=63.73%, 10=35.53%, 20=0.02% 00:09:29.778 cpu : usr=98.00%, sys=0.35%, ctx=4, majf=0, minf=605 00:09:29.778 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:29.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.778 issued rwts: total=33566,33631,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:29.778 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.778 00:09:29.778 Run status group 0 (all jobs): 00:09:29.778 READ: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=131MiB (137MB), run=2001-2001msec 00:09:29.778 WRITE: bw=65.7MiB/s (68.8MB/s), 65.7MiB/s-65.7MiB/s (68.8MB/s-68.8MB/s), io=131MiB (138MB), run=2001-2001msec 00:09:29.778 ----------------------------------------------------- 00:09:29.778 Suppressions used: 00:09:29.778 count bytes template 00:09:29.778 1 32 /usr/src/fio/parse.c 00:09:29.778 1 8 libtcmalloc_minimal.so 00:09:29.778 ----------------------------------------------------- 00:09:29.778 00:09:29.778 19:53:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:29.778 19:53:13 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:29.778 00:09:29.778 real 0m28.462s 00:09:29.778 user 0m16.470s 00:09:29.778 sys 0m22.349s 00:09:29.778 19:53:13 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.778 19:53:13 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:29.778 ************************************ 00:09:29.778 END TEST nvme_fio 00:09:29.778 ************************************ 00:09:29.778 00:09:29.778 real 1m37.150s 00:09:29.778 user 3m35.649s 00:09:29.778 sys 0m32.617s 00:09:29.778 19:53:13 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.778 19:53:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.778 ************************************ 00:09:29.778 END TEST nvme 00:09:29.778 ************************************ 00:09:29.778 19:53:13 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:29.778 19:53:13 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:29.778 19:53:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.778 19:53:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.778 19:53:13 -- common/autotest_common.sh@10 -- # set +x 
00:09:29.778 ************************************ 00:09:29.778 START TEST nvme_scc 00:09:29.778 ************************************ 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:29.778 * Looking for test storage... 00:09:29.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.778 19:53:13 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.778 --rc genhtml_branch_coverage=1 00:09:29.778 --rc genhtml_function_coverage=1 00:09:29.778 --rc genhtml_legend=1 00:09:29.778 --rc geninfo_all_blocks=1 00:09:29.778 --rc geninfo_unexecuted_blocks=1 00:09:29.778 00:09:29.778 ' 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.778 --rc genhtml_branch_coverage=1 00:09:29.778 --rc genhtml_function_coverage=1 00:09:29.778 --rc genhtml_legend=1 00:09:29.778 --rc geninfo_all_blocks=1 00:09:29.778 --rc geninfo_unexecuted_blocks=1 00:09:29.778 00:09:29.778 ' 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:09:29.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.778 --rc genhtml_branch_coverage=1 00:09:29.778 --rc genhtml_function_coverage=1 00:09:29.778 --rc genhtml_legend=1 00:09:29.778 --rc geninfo_all_blocks=1 00:09:29.778 --rc geninfo_unexecuted_blocks=1 00:09:29.778 00:09:29.778 ' 00:09:29.778 19:53:13 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.778 --rc genhtml_branch_coverage=1 00:09:29.778 --rc genhtml_function_coverage=1 00:09:29.778 --rc genhtml_legend=1 00:09:29.778 --rc geninfo_all_blocks=1 00:09:29.778 --rc geninfo_unexecuted_blocks=1 00:09:29.778 00:09:29.778 ' 00:09:29.778 19:53:13 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:29.778 19:53:13 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.779 19:53:13 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.779 19:53:13 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.779 19:53:13 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.779 19:53:13 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.779 19:53:13 nvme_scc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.779 19:53:13 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.779 19:53:13 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.779 19:53:13 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:29.779 19:53:13 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:29.779 19:53:13 
nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:29.779 19:53:13 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:29.779 19:53:13 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.779 19:53:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:29.779 19:53:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:29.779 19:53:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:29.779 19:53:13 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:29.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.779 Waiting for block devices as requested 00:09:29.779 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.779 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.779 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.779 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.080 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:35.080 19:53:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 
00:09:35.080 19:53:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.080 19:53:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.080 19:53:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.080 19:53:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 
00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[cmic]="0"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:35.080 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:35.081 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.081 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:35.081 
19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:35.081 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:35.082 
19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[sqes]="0x66"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:35.082 19:53:19 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 
19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:35.082 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[maxdna]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:35.083 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 
rwl:0 idle_power:- active_power:-"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[nsze]="0x140000"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:35.083 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:35.083 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[npwg]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:35.084 19:53:19 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.084 19:53:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:35.085 19:53:19 nvme_scc -- 
scripts/common.sh@18 -- # local i 00:09:35.085 19:53:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.085 19:53:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.085 19:53:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:35.085 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:35.085 19:53:19 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:35.085 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:35.086 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- 
# nvme1[kas]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.086 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:35.087 19:53:19 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:35.087 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:35.087 
19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.087 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.088 
19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:35.088 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:35.088 
19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[npwg]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.088 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:35.089 19:53:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.089 19:53:19 nvme_scc -- scripts/common.sh@21 -- # 
[[ =~ 0000:00:12.0 ]] 00:09:35.089 19:53:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.089 19:53:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:35.089 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.089 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:35.090 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 
19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 
19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 
00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.090 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 
00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:35.091 
19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:35.091 19:53:19 nvme_scc -- 
00:09:35.091 19:53:19 nvme_scc -- nvme/functions.sh -- # [repetitive nvme_get trace condensed] nvme2 populated from `nvme id-ctrl /dev/nvme2`:
00:09:35.091 19:53:19 nvme_scc --   cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:09:35.092 19:53:19 nvme_scc --   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:35.092 19:53:19 nvme_scc --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:35.092 19:53:19 nvme_scc -- nvme/functions.sh@53-57 -- # _ctrl_ns=nvme2_ns; nvme2n1 populated from `/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1`:
00:09:35.092 19:53:19 nvme_scc --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:35.093 19:53:19 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:35.093 19:53:19 nvme_scc --   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:35.094 19:53:19 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:35.094 19:53:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:09:35.094 19:53:19 nvme_scc -- nvme/functions.sh@54-57 -- # nvme2n2 populated from `/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2` with the same field values as nvme2n1 (nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dps=0 ... nows=0; trace continues)
nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.358 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 
(in use)' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:35.359 
19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:35.359 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.359 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[noiob]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:35.360 19:53:19 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 
00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.360 19:53:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:35.361 19:53:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.361 19:53:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:35.361 19:53:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.361 19:53:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[vid]="0x1b36"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:35.361 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.361 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:35.362 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:35.362 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.362 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:35.363 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:35.363 19:53:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:35.363 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme3[ioccsz]=0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # 
get_ctrl_with_feature scc 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 
00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:35.364 19:53:19 
nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:35.364 19:53:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:35.364 19:53:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:35.364 19:53:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:35.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.191 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.191 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.191 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.191 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:36.191 19:53:20 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test 
nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:36.191 19:53:20 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:36.191 19:53:20 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.191 19:53:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:36.191 ************************************ 00:09:36.191 START TEST nvme_simple_copy 00:09:36.191 ************************************ 00:09:36.191 19:53:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:36.449 Initializing NVMe Controllers 00:09:36.449 Attaching to 0000:00:10.0 00:09:36.449 Controller supports SCC. Attached to 0000:00:10.0 00:09:36.449 Namespace ID: 1 size: 6GB 00:09:36.449 Initialization complete. 00:09:36.449 00:09:36.449 Controller QEMU NVMe Ctrl (12340 ) 00:09:36.449 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:36.449 Namespace Block Size:4096 00:09:36.449 Writing LBAs 0 to 63 with Random Data 00:09:36.449 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:36.449 LBAs matching Written Data: 64 00:09:36.449 00:09:36.449 real 0m0.243s 00:09:36.449 user 0m0.079s 00:09:36.449 sys 0m0.062s 00:09:36.449 ************************************ 00:09:36.449 END TEST nvme_simple_copy 00:09:36.449 ************************************ 00:09:36.449 19:53:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.449 19:53:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:36.449 ************************************ 00:09:36.449 END TEST nvme_scc 00:09:36.449 ************************************ 00:09:36.449 00:09:36.449 real 0m7.499s 00:09:36.449 user 0m1.040s 00:09:36.449 sys 0m1.341s 00:09:36.449 19:53:20 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.449 19:53:20 
nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 19:53:20 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:36.708 19:53:20 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:36.708 19:53:20 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:36.708 19:53:20 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:36.708 19:53:20 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:36.708 19:53:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:36.708 19:53:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.708 19:53:20 -- common/autotest_common.sh@10 -- # set +x 00:09:36.708 ************************************ 00:09:36.708 START TEST nvme_fdp 00:09:36.708 ************************************ 00:09:36.708 19:53:20 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:36.708 * Looking for test storage... 00:09:36.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.708 19:53:20 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:36.708 19:53:20 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:36.708 19:53:20 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:36.708 19:53:20 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.708 19:53:20 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.709 
19:53:20 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:36.709 19:53:20 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.709 19:53:20 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:36.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.709 --rc genhtml_branch_coverage=1 00:09:36.709 --rc genhtml_function_coverage=1 00:09:36.709 --rc genhtml_legend=1 00:09:36.709 --rc geninfo_all_blocks=1 00:09:36.709 --rc geninfo_unexecuted_blocks=1 00:09:36.709 00:09:36.709 ' 00:09:36.709 19:53:20 nvme_fdp -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:36.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.709 --rc genhtml_branch_coverage=1 00:09:36.709 --rc genhtml_function_coverage=1 00:09:36.709 --rc genhtml_legend=1 00:09:36.709 --rc geninfo_all_blocks=1 00:09:36.709 --rc geninfo_unexecuted_blocks=1 00:09:36.709 00:09:36.709 ' 00:09:36.709 19:53:20 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:36.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.709 --rc genhtml_branch_coverage=1 00:09:36.709 --rc genhtml_function_coverage=1 00:09:36.709 --rc genhtml_legend=1 00:09:36.709 --rc geninfo_all_blocks=1 00:09:36.709 --rc geninfo_unexecuted_blocks=1 00:09:36.709 00:09:36.709 ' 00:09:36.709 19:53:20 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:36.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.709 --rc genhtml_branch_coverage=1 00:09:36.709 --rc genhtml_function_coverage=1 00:09:36.709 --rc genhtml_legend=1 00:09:36.709 --rc geninfo_all_blocks=1 00:09:36.709 --rc geninfo_unexecuted_blocks=1 00:09:36.709 00:09:36.709 ' 00:09:36.709 19:53:20 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:09:36.709 19:53:20 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.709 19:53:20 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.709 19:53:20 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.709 19:53:20 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.709 19:53:20 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:36.709 19:53:20 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:36.709 19:53:20 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:36.709 19:53:20 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.709 19:53:20 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:36.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.226 Waiting for block devices as requested 00:09:37.226 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.226 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.226 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.484 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.758 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:42.758 19:53:26 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:42.758 19:53:26 nvme_fdp -- scripts/common.sh@18 -- # 
local i 00:09:42.758 19:53:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:42.758 19:53:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.758 19:53:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.758 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme0[sn]="12341 "' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 
-- # nvme0[cmic]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.759 19:53:26 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:42.759 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0[lpa]=0x7 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.760 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:42.761 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.762 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:09:42.763 19:53:26 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:42.763 19:53:26 nvme_fdp -- scripts/common.sh@21 -- #
[[ =~ 0000:00:10.0 ]] 00:09:42.763 19:53:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.763 19:53:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:42.763 19:53:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:42.763 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:42.764 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 
19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 
19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 
00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:42.764 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 
00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:42.765 
19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:42.765 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 
19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:42.765 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[nwpc]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:42.766 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- 
active_power:-' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:42.766 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:42.766 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:42.767 19:53:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.767 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.768 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:42.768 19:53:26 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:42.768 19:53:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:42.768 19:53:26 nvme_fdp -- 
scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.768 19:53:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:42.768 
19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 
00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.768 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.769 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:09:42.770 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.771 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.772 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.772 
19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:42.772 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp 
-- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nacwu]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:42.773 
19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:42.773 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n ms:0 lbads:9 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.774 19:53:26 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:42.774 19:53:26 nvme_fdp -- 
nvme/functions.sh@18 -- # shift 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nsfeat]=0x14 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:42.774 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:42.775 19:53:26 
nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp 
-- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.775 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.776 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:42.776 19:53:26 nvme_fdp -- 
nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:42.776 19:53:26 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:42.776 19:53:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:42.776 19:53:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.776 19:53:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 
00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[rab]=6 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 1 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.776 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 
19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[aerl]=3 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 
00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:42.777 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[domainid]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 
19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[sgls]=0x1 00:09:42.778 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:42.779 19:53:26 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 
19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:42.779 19:53:26 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:42.779 
19:53:26 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:42.779 
19:53:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@199 -- # 
ctrl_has_fdp nvme2 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:42.779 19:53:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.780 19:53:26 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:42.780 19:53:26 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:42.780 19:53:26 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:42.780 19:53:26 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:42.780 19:53:26 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:42.780 19:53:26 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:43.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:43.607 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.607 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.607 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.607 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:43.865 19:53:27 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:43.865 19:53:27 nvme_fdp -- common/autotest_common.sh@1101 -- # 
'[' 4 -le 1 ']' 00:09:43.866 19:53:27 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.866 19:53:27 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:43.866 ************************************ 00:09:43.866 START TEST nvme_flexible_data_placement 00:09:43.866 ************************************ 00:09:43.866 19:53:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:43.866 Initializing NVMe Controllers 00:09:43.866 Attaching to 0000:00:13.0 00:09:43.866 Controller supports FDP Attached to 0000:00:13.0 00:09:43.866 Namespace ID: 1 Endurance Group ID: 1 00:09:43.866 Initialization complete. 00:09:43.866 00:09:43.866 ================================== 00:09:43.866 == FDP tests for Namespace: #01 == 00:09:43.866 ================================== 00:09:43.866 00:09:43.866 Get Feature: FDP: 00:09:43.866 ================= 00:09:43.866 Enabled: Yes 00:09:43.866 FDP configuration Index: 0 00:09:43.866 00:09:43.866 FDP configurations log page 00:09:43.866 =========================== 00:09:43.866 Number of FDP configurations: 1 00:09:43.866 Version: 0 00:09:43.866 Size: 112 00:09:43.866 FDP Configuration Descriptor: 0 00:09:43.866 Descriptor Size: 96 00:09:43.866 Reclaim Group Identifier format: 2 00:09:43.866 FDP Volatile Write Cache: Not Present 00:09:43.866 FDP Configuration: Valid 00:09:43.866 Vendor Specific Size: 0 00:09:43.866 Number of Reclaim Groups: 2 00:09:43.866 Number of Reclaim Unit Handles: 8 00:09:43.866 Max Placement Identifiers: 128 00:09:43.866 Number of Namespaces Supported: 256 00:09:43.866 Reclaim unit Nominal Size: 6000000 bytes 00:09:43.866 Estimated Reclaim Unit Time Limit: Not Reported 00:09:43.866 RUH Desc #000: RUH Type: Initially Isolated 00:09:43.866 RUH Desc #001: RUH Type: Initially Isolated 00:09:43.866 RUH Desc #002: RUH Type: Initially Isolated 00:09:43.866 RUH Desc #003: RUH Type: Initially 
Isolated 00:09:43.866 RUH Desc #004: RUH Type: Initially Isolated 00:09:43.866 RUH Desc #005: RUH Type: Initially Isolated 00:09:43.866 RUH Desc #006: RUH Type: Initially Isolated 00:09:43.866 RUH Desc #007: RUH Type: Initially Isolated 00:09:43.866 00:09:43.866 FDP reclaim unit handle usage log page 00:09:43.866 ====================================== 00:09:43.866 Number of Reclaim Unit Handles: 8 00:09:43.866 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:43.866 RUH Usage Desc #001: RUH Attributes: Unused 00:09:43.866 RUH Usage Desc #002: RUH Attributes: Unused 00:09:43.866 RUH Usage Desc #003: RUH Attributes: Unused 00:09:43.866 RUH Usage Desc #004: RUH Attributes: Unused 00:09:43.866 RUH Usage Desc #005: RUH Attributes: Unused 00:09:43.866 RUH Usage Desc #006: RUH Attributes: Unused 00:09:43.866 RUH Usage Desc #007: RUH Attributes: Unused 00:09:43.866 00:09:43.866 FDP statistics log page 00:09:43.866 ======================= 00:09:43.866 Host bytes with metadata written: 1067360256 00:09:43.866 Media bytes with metadata written: 1067638784 00:09:43.866 Media bytes erased: 0 00:09:43.866 00:09:43.866 FDP Reclaim unit handle status 00:09:43.866 ============================== 00:09:43.866 Number of RUHS descriptors: 2 00:09:43.866 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002616 00:09:43.866 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:43.866 00:09:43.866 FDP write on placement id: 0 success 00:09:43.866 00:09:43.866 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:43.866 00:09:43.866 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:43.866 00:09:43.866 Get Feature: FDP Events for Placement handle: #0 00:09:43.866 ======================== 00:09:43.866 Number of FDP Events: 6 00:09:43.866 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:43.866 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:43.866 FDP 
Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:43.866 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:43.866 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:43.866 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:43.866 00:09:43.866 FDP events log page 00:09:43.866 =================== 00:09:43.866 Number of FDP events: 1 00:09:43.866 FDP Event #0: 00:09:43.866 Event Type: RU Not Written to Capacity 00:09:43.866 Placement Identifier: Valid 00:09:43.866 NSID: Valid 00:09:43.866 Location: Valid 00:09:43.866 Placement Identifier: 0 00:09:43.866 Event Timestamp: 5 00:09:43.866 Namespace Identifier: 1 00:09:43.866 Reclaim Group Identifier: 0 00:09:43.866 Reclaim Unit Handle Identifier: 0 00:09:43.866 00:09:43.866 FDP test passed 00:09:44.124 ************************************ 00:09:44.124 END TEST nvme_flexible_data_placement 00:09:44.124 ************************************ 00:09:44.124 00:09:44.124 real 0m0.225s 00:09:44.124 user 0m0.060s 00:09:44.124 sys 0m0.064s 00:09:44.124 19:53:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.124 19:53:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 ************************************ 00:09:44.124 END TEST nvme_fdp 00:09:44.124 ************************************ 00:09:44.124 00:09:44.124 real 0m7.445s 00:09:44.124 user 0m0.960s 00:09:44.124 sys 0m1.363s 00:09:44.124 19:53:28 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.124 19:53:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 19:53:28 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:44.124 19:53:28 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:44.124 19:53:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.124 19:53:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.124 19:53:28 -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.124 ************************************ 00:09:44.124 START TEST nvme_rpc 00:09:44.124 ************************************ 00:09:44.124 19:53:28 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:44.124 * Looking for test storage... 00:09:44.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:44.124 19:53:28 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.125 19:53:28 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.125 --rc genhtml_branch_coverage=1 00:09:44.125 --rc genhtml_function_coverage=1 00:09:44.125 --rc genhtml_legend=1 00:09:44.125 --rc geninfo_all_blocks=1 00:09:44.125 --rc geninfo_unexecuted_blocks=1 00:09:44.125 00:09:44.125 ' 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.125 --rc genhtml_branch_coverage=1 00:09:44.125 --rc genhtml_function_coverage=1 00:09:44.125 --rc genhtml_legend=1 00:09:44.125 --rc geninfo_all_blocks=1 00:09:44.125 --rc geninfo_unexecuted_blocks=1 00:09:44.125 00:09:44.125 ' 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:09:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.125 --rc genhtml_branch_coverage=1 00:09:44.125 --rc genhtml_function_coverage=1 00:09:44.125 --rc genhtml_legend=1 00:09:44.125 --rc geninfo_all_blocks=1 00:09:44.125 --rc geninfo_unexecuted_blocks=1 00:09:44.125 00:09:44.125 ' 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.125 --rc genhtml_branch_coverage=1 00:09:44.125 --rc genhtml_function_coverage=1 00:09:44.125 --rc genhtml_legend=1 00:09:44.125 --rc geninfo_all_blocks=1 00:09:44.125 --rc geninfo_unexecuted_blocks=1 00:09:44.125 00:09:44.125 ' 00:09:44.125 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.125 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:44.125 19:53:28 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:44.384 
19:53:28 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:44.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.384 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:44.384 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66057 00:09:44.384 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:44.384 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66057 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66057 ']' 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.384 19:53:28 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.384 19:53:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.384 [2024-09-30 19:53:28.581617] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:44.384 [2024-09-30 19:53:28.581735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66057 ] 00:09:44.384 [2024-09-30 19:53:28.730166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.642 [2024-09-30 19:53:28.908924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.642 [2024-09-30 19:53:28.909106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.206 19:53:29 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.206 19:53:29 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:45.206 19:53:29 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:45.465 Nvme0n1 00:09:45.465 19:53:29 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:45.465 19:53:29 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:45.724 request: 00:09:45.724 { 00:09:45.724 "bdev_name": "Nvme0n1", 00:09:45.724 "filename": "non_existing_file", 00:09:45.724 "method": "bdev_nvme_apply_firmware", 00:09:45.724 "req_id": 1 00:09:45.724 } 00:09:45.724 Got JSON-RPC error response 00:09:45.724 response: 00:09:45.724 { 00:09:45.724 "code": -32603, 00:09:45.724 "message": "open file failed." 
00:09:45.724 } 00:09:45.724 19:53:29 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:45.724 19:53:29 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:45.724 19:53:29 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:45.981 19:53:30 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:45.981 19:53:30 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66057 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66057 ']' 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66057 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66057 00:09:45.981 killing process with pid 66057 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66057' 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66057 00:09:45.981 19:53:30 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66057 00:09:47.882 ************************************ 00:09:47.882 END TEST nvme_rpc 00:09:47.882 ************************************ 00:09:47.882 00:09:47.882 real 0m3.550s 00:09:47.882 user 0m6.515s 00:09:47.882 sys 0m0.568s 00:09:47.882 19:53:31 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.882 19:53:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.882 19:53:31 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:47.882 19:53:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:47.882 
19:53:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.882 19:53:31 -- common/autotest_common.sh@10 -- # set +x 00:09:47.882 ************************************ 00:09:47.882 START TEST nvme_rpc_timeouts 00:09:47.882 ************************************ 00:09:47.882 19:53:31 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:47.882 * Looking for test storage... 00:09:47.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.882 19:53:31 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:47.882 19:53:31 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:47.882 19:53:31 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:47.882 
19:53:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.882 19:53:32 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.882 --rc genhtml_branch_coverage=1 00:09:47.882 --rc genhtml_function_coverage=1 00:09:47.882 --rc genhtml_legend=1 00:09:47.882 --rc geninfo_all_blocks=1 00:09:47.882 --rc geninfo_unexecuted_blocks=1 00:09:47.882 00:09:47.882 ' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.882 
--rc genhtml_branch_coverage=1 00:09:47.882 --rc genhtml_function_coverage=1 00:09:47.882 --rc genhtml_legend=1 00:09:47.882 --rc geninfo_all_blocks=1 00:09:47.882 --rc geninfo_unexecuted_blocks=1 00:09:47.882 00:09:47.882 ' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.882 --rc genhtml_branch_coverage=1 00:09:47.882 --rc genhtml_function_coverage=1 00:09:47.882 --rc genhtml_legend=1 00:09:47.882 --rc geninfo_all_blocks=1 00:09:47.882 --rc geninfo_unexecuted_blocks=1 00:09:47.882 00:09:47.882 ' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.882 --rc genhtml_branch_coverage=1 00:09:47.882 --rc genhtml_function_coverage=1 00:09:47.882 --rc genhtml_legend=1 00:09:47.882 --rc geninfo_all_blocks=1 00:09:47.882 --rc geninfo_unexecuted_blocks=1 00:09:47.882 00:09:47.882 ' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66122 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66122 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66159 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66159 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66159 ']' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:47.882 19:53:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.882 19:53:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:47.882 [2024-09-30 19:53:32.126821] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:47.882 [2024-09-30 19:53:32.126949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66159 ] 00:09:48.139 [2024-09-30 19:53:32.277676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.139 [2024-09-30 19:53:32.479471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.140 [2024-09-30 19:53:32.479577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.069 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.069 Checking default timeout settings: 00:09:49.069 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:49.069 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:49.069 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.326 Making settings changes with rpc: 00:09:49.326 
19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:49.326 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:49.326 Check default vs. modified settings: 00:09:49.326 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:49.326 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.583 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:49.583 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:49.583 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.583 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66122 00:09:49.583 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66122 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:49.840 Setting action_on_timeout is changed as expected. 
00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66122 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66122 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.840 Setting timeout_us is changed as expected. 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66122 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66122 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:49.840 Setting timeout_admin_us is changed as expected. 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66122 /tmp/settings_modified_66122 00:09:49.840 19:53:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66159 00:09:49.841 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66159 ']' 00:09:49.841 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66159 00:09:49.841 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:49.841 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.841 19:53:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66159 00:09:49.841 killing process with pid 66159 00:09:49.841 19:53:34 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.841 19:53:34 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.841 19:53:34 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66159' 00:09:49.841 19:53:34 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66159 00:09:49.841 19:53:34 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66159 00:09:51.227 RPC TIMEOUT SETTING TEST PASSED. 00:09:51.227 19:53:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:09:51.227 00:09:51.227 real 0m3.568s 00:09:51.227 user 0m6.655s 00:09:51.227 sys 0m0.562s 00:09:51.227 ************************************ 00:09:51.227 END TEST nvme_rpc_timeouts 00:09:51.227 ************************************ 00:09:51.227 19:53:35 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.227 19:53:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:51.227 19:53:35 -- spdk/autotest.sh@239 -- # uname -s 00:09:51.227 19:53:35 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:51.227 19:53:35 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:51.227 19:53:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:51.227 19:53:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.227 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:09:51.227 ************************************ 00:09:51.227 START TEST sw_hotplug 00:09:51.227 ************************************ 00:09:51.227 19:53:35 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:51.227 * Looking for test storage... 
00:09:51.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.227 19:53:35 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:51.227 19:53:35 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:51.227 19:53:35 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:51.485 19:53:35 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.485 19:53:35 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.486 19:53:35 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:51.486 19:53:35 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.486 19:53:35 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:51.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.486 --rc genhtml_branch_coverage=1 00:09:51.486 --rc genhtml_function_coverage=1 00:09:51.486 --rc genhtml_legend=1 00:09:51.486 --rc geninfo_all_blocks=1 00:09:51.486 --rc geninfo_unexecuted_blocks=1 00:09:51.486 00:09:51.486 ' 00:09:51.486 19:53:35 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:51.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.486 --rc genhtml_branch_coverage=1 00:09:51.486 --rc genhtml_function_coverage=1 00:09:51.486 --rc genhtml_legend=1 00:09:51.486 --rc geninfo_all_blocks=1 00:09:51.486 --rc geninfo_unexecuted_blocks=1 00:09:51.486 00:09:51.486 ' 00:09:51.486 19:53:35 sw_hotplug -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:51.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.486 --rc genhtml_branch_coverage=1 00:09:51.486 --rc genhtml_function_coverage=1 00:09:51.486 --rc genhtml_legend=1 00:09:51.486 --rc geninfo_all_blocks=1 00:09:51.486 --rc geninfo_unexecuted_blocks=1 00:09:51.486 00:09:51.486 ' 00:09:51.486 19:53:35 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:51.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.486 --rc genhtml_branch_coverage=1 00:09:51.486 --rc genhtml_function_coverage=1 00:09:51.486 --rc genhtml_legend=1 00:09:51.486 --rc geninfo_all_blocks=1 00:09:51.486 --rc geninfo_unexecuted_blocks=1 00:09:51.486 00:09:51.486 ' 00:09:51.486 19:53:35 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:51.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.744 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.744 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.744 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.744 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.744 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:51.744 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:51.744 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:51.744 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:51.744 19:53:36 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:51.744 19:53:36 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:51.744 19:53:36 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:52.003 19:53:36 
sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 
00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 
00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:52.003 19:53:36 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:52.003 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:52.003 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:52.003 19:53:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:52.261 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.261 Waiting for block devices as requested 00:09:52.519 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.519 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.519 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.519 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.806 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:57.806 19:53:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:57.806 19:53:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:58.066 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:58.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.066 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:58.326 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:58.587 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.587 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.587 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:58.587 19:53:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67017 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:58.848 19:53:42 sw_hotplug 
-- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:58.848 19:53:42 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:58.848 19:53:42 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:58.848 19:53:42 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:58.848 19:53:42 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:58.848 19:53:42 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:58.848 19:53:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:58.848 Initializing NVMe Controllers 00:09:58.848 Attaching to 0000:00:10.0 00:09:58.848 Attaching to 0000:00:11.0 00:09:58.848 Attached to 0000:00:11.0 00:09:58.848 Attached to 0000:00:10.0 00:09:58.848 Initialization complete. Starting I/O... 
00:09:58.848 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:58.848 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:58.848 00:09:59.789 QEMU NVMe Ctrl (12341 ): 2569 I/Os completed (+2569) 00:09:59.789 QEMU NVMe Ctrl (12340 ): 2528 I/Os completed (+2528) 00:09:59.789 00:10:01.175 QEMU NVMe Ctrl (12341 ): 5777 I/Os completed (+3208) 00:10:01.175 QEMU NVMe Ctrl (12340 ): 5751 I/Os completed (+3223) 00:10:01.175 00:10:02.117 QEMU NVMe Ctrl (12341 ): 8979 I/Os completed (+3202) 00:10:02.117 QEMU NVMe Ctrl (12340 ): 8932 I/Os completed (+3181) 00:10:02.117 00:10:03.061 QEMU NVMe Ctrl (12341 ): 12141 I/Os completed (+3162) 00:10:03.061 QEMU NVMe Ctrl (12340 ): 12063 I/Os completed (+3131) 00:10:03.061 00:10:04.024 QEMU NVMe Ctrl (12341 ): 15234 I/Os completed (+3093) 00:10:04.024 QEMU NVMe Ctrl (12340 ): 15180 I/Os completed (+3117) 00:10:04.024 00:10:04.616 19:53:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:04.616 19:53:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.616 19:53:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.616 [2024-09-30 19:53:48.969782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:04.616 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:04.617 [2024-09-30 19:53:48.971675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 [2024-09-30 19:53:48.971833] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 [2024-09-30 19:53:48.971858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 [2024-09-30 19:53:48.971876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:04.617 [2024-09-30 19:53:48.973789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 [2024-09-30 19:53:48.973892] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 [2024-09-30 19:53:48.973984] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.617 [2024-09-30 19:53:48.974017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 19:53:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.876 19:53:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.876 [2024-09-30 19:53:48.994262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:04.876 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:04.876 [2024-09-30 19:53:48.995372] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 [2024-09-30 19:53:48.995406] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 [2024-09-30 19:53:48.995427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 [2024-09-30 19:53:48.995441] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:04.876 [2024-09-30 19:53:48.998602] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 [2024-09-30 19:53:48.998639] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 [2024-09-30 19:53:48.998653] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 [2024-09-30 19:53:48.998666] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.876 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:04.876 EAL: Scan for (pci) bus failed. 
00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:04.876 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:04.876 Attaching to 0000:00:10.0 00:10:04.876 Attached to 0000:00:10.0 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.876 19:53:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:04.876 Attaching to 0000:00:11.0 00:10:05.137 Attached to 0000:00:11.0 00:10:06.080 QEMU NVMe Ctrl (12340 ): 3601 I/Os completed (+3601) 00:10:06.080 QEMU NVMe Ctrl (12341 ): 3313 I/Os completed (+3313) 00:10:06.080 00:10:07.023 QEMU NVMe Ctrl (12340 ): 6671 I/Os completed (+3070) 00:10:07.023 QEMU NVMe Ctrl (12341 ): 6377 I/Os completed (+3064) 00:10:07.023 00:10:07.958 QEMU NVMe Ctrl (12340 ): 10110 I/Os completed (+3439) 00:10:07.958 QEMU NVMe Ctrl (12341 ): 9800 I/Os completed (+3423) 00:10:07.958 00:10:08.893 QEMU NVMe Ctrl (12340 ): 13799 I/Os completed (+3689) 00:10:08.893 QEMU NVMe Ctrl (12341 ): 13493 I/Os completed (+3693) 00:10:08.893 00:10:09.827 QEMU NVMe Ctrl (12340 ): 17493 I/Os completed (+3694) 00:10:09.827 QEMU NVMe Ctrl (12341 ): 17177 I/Os completed (+3684) 00:10:09.827 00:10:11.201 QEMU NVMe Ctrl (12340 ): 21155 I/Os completed (+3662) 00:10:11.201 
QEMU NVMe Ctrl (12341 ): 20840 I/Os completed (+3663) 00:10:11.201 00:10:12.135 QEMU NVMe Ctrl (12340 ): 24821 I/Os completed (+3666) 00:10:12.135 QEMU NVMe Ctrl (12341 ): 24509 I/Os completed (+3669) 00:10:12.135 00:10:13.075 QEMU NVMe Ctrl (12340 ): 28473 I/Os completed (+3652) 00:10:13.075 QEMU NVMe Ctrl (12341 ): 28185 I/Os completed (+3676) 00:10:13.075 00:10:14.019 QEMU NVMe Ctrl (12340 ): 32156 I/Os completed (+3683) 00:10:14.019 QEMU NVMe Ctrl (12341 ): 31871 I/Os completed (+3686) 00:10:14.019 00:10:14.959 QEMU NVMe Ctrl (12340 ): 35649 I/Os completed (+3493) 00:10:14.959 QEMU NVMe Ctrl (12341 ): 35291 I/Os completed (+3420) 00:10:14.959 00:10:15.899 QEMU NVMe Ctrl (12340 ): 38736 I/Os completed (+3087) 00:10:15.899 QEMU NVMe Ctrl (12341 ): 38472 I/Os completed (+3181) 00:10:15.899 00:10:16.839 QEMU NVMe Ctrl (12340 ): 42061 I/Os completed (+3325) 00:10:16.839 QEMU NVMe Ctrl (12341 ): 41811 I/Os completed (+3339) 00:10:16.839 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:17.099 [2024-09-30 19:54:01.246622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:17.099 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:17.099 [2024-09-30 19:54:01.247593] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.247708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.247738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.247797] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:17.099 [2024-09-30 19:54:01.249433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.249558] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.249587] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.249734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:17.099 EAL: Scan for (pci) bus failed. 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:17.099 [2024-09-30 19:54:01.265816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:17.099 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:17.099 [2024-09-30 19:54:01.266721] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.266811] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.266844] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.266904] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:17.099 [2024-09-30 19:54:01.268414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.268507] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.268566] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 [2024-09-30 19:54:01.268591] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:17.099 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:17.099 Attaching to 0000:00:10.0 00:10:17.099 Attached to 0000:00:10.0 00:10:17.360 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:17.360 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.360 19:54:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:17.360 Attaching to 0000:00:11.0 00:10:17.360 Attached to 0000:00:11.0 00:10:17.930 QEMU NVMe Ctrl (12340 ): 2710 I/Os completed (+2710) 00:10:17.930 QEMU NVMe Ctrl (12341 ): 2401 I/Os completed (+2401) 00:10:17.930 00:10:18.875 QEMU NVMe Ctrl (12340 ): 6412 I/Os completed (+3702) 00:10:18.875 QEMU NVMe Ctrl (12341 ): 6105 I/Os completed (+3704) 00:10:18.875 00:10:19.824 QEMU NVMe Ctrl (12340 ): 10123 I/Os completed (+3711) 00:10:19.824 QEMU NVMe Ctrl (12341 ): 9806 I/Os completed (+3701) 00:10:19.824 00:10:21.212 QEMU NVMe Ctrl (12340 ): 13830 I/Os completed (+3707) 00:10:21.212 QEMU NVMe Ctrl (12341 ): 13503 I/Os completed (+3697) 00:10:21.212 00:10:22.150 QEMU NVMe Ctrl (12340 ): 17685 I/Os completed (+3855) 00:10:22.150 QEMU NVMe Ctrl (12341 ): 17359 I/Os completed (+3856) 00:10:22.150 00:10:23.092 QEMU NVMe Ctrl (12340 ): 21398 I/Os completed (+3713) 00:10:23.092 QEMU NVMe Ctrl (12341 ): 21080 I/Os completed (+3721) 00:10:23.092 00:10:24.030 QEMU NVMe Ctrl (12340 ): 24863 I/Os completed (+3465) 00:10:24.030 QEMU NVMe Ctrl (12341 ): 24590 I/Os completed (+3510) 00:10:24.030 00:10:24.962 QEMU NVMe Ctrl (12340 ): 28269 I/Os completed (+3406) 00:10:24.962 QEMU NVMe Ctrl (12341 ): 28008 I/Os completed (+3418) 00:10:24.962 00:10:25.894 QEMU NVMe Ctrl (12340 ): 31985 I/Os completed (+3716) 00:10:25.894 QEMU NVMe Ctrl (12341 ): 31601 I/Os completed (+3593) 00:10:25.894 00:10:26.827 QEMU NVMe Ctrl (12340 ): 35252 I/Os completed (+3267) 00:10:26.827 QEMU NVMe Ctrl (12341 ): 34823 I/Os completed (+3222) 00:10:26.827 00:10:28.202 QEMU NVMe Ctrl (12340 ): 38826 I/Os completed (+3574) 00:10:28.202 QEMU NVMe Ctrl (12341 ): 38432 I/Os completed (+3609) 00:10:28.202 
00:10:29.144 QEMU NVMe Ctrl (12340 ): 42092 I/Os completed (+3266) 00:10:29.144 QEMU NVMe Ctrl (12341 ): 41703 I/Os completed (+3271) 00:10:29.144 00:10:29.144 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:29.144 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:29.144 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:29.144 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:29.144 [2024-09-30 19:54:13.506834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:29.144 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:29.144 [2024-09-30 19:54:13.507803] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.144 [2024-09-30 19:54:13.507914] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.144 [2024-09-30 19:54:13.507946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.144 [2024-09-30 19:54:13.508003] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.405 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:29.405 [2024-09-30 19:54:13.509594] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.405 [2024-09-30 19:54:13.509687] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.405 [2024-09-30 19:54:13.509717] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.405 [2024-09-30 19:54:13.509738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.405 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:29.406 [2024-09-30 19:54:13.528921] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:29.406 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:29.406 [2024-09-30 19:54:13.529871] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 [2024-09-30 19:54:13.529924] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 [2024-09-30 19:54:13.529950] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 [2024-09-30 19:54:13.530011] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:29.406 [2024-09-30 19:54:13.531378] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 [2024-09-30 19:54:13.531456] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 [2024-09-30 19:54:13.531484] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 [2024-09-30 19:54:13.531533] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 
00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:29.406 Attaching to 0000:00:10.0 00:10:29.406 Attached to 0000:00:10.0 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.406 19:54:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:29.406 Attaching to 0000:00:11.0 00:10:29.406 Attached to 0000:00:11.0 00:10:29.406 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:29.406 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:29.406 [2024-09-30 19:54:13.762819] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:41.652 19:54:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:41.652 19:54:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.652 19:54:25 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.79 00:10:41.652 19:54:25 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.79 00:10:41.652 19:54:25 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:41.652 19:54:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.79 00:10:41.652 19:54:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.79 2 00:10:41.652 remove_attach_helper took 42.79s to complete (handling 2 nvme drive(s)) 19:54:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67017 00:10:48.233 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67017) - No such process 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67017 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # 
local dev 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67565 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:48.233 19:54:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67565 00:10:48.233 19:54:31 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67565 ']' 00:10:48.233 19:54:31 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.233 19:54:31 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.233 19:54:31 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.233 19:54:31 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.233 19:54:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.233 [2024-09-30 19:54:31.843916] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:48.233 [2024-09-30 19:54:31.844199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67565 ] 00:10:48.233 [2024-09-30 19:54:31.994304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.233 [2024-09-30 19:54:32.169907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:48.491 19:54:32 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local 
use_bdev=true 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:48.491 19:54:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.070 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.071 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.071 19:54:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.071 19:54:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.071 19:54:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.071 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:55.071 19:54:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:55.071 [2024-09-30 19:54:38.851578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:55.071 [2024-09-30 19:54:38.853198] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:38.853246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:38.853261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 [2024-09-30 19:54:38.853298] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:38.853309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:38.853320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 [2024-09-30 19:54:38.853330] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:38.853341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:38.853349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 [2024-09-30 19:54:38.853365] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:38.853373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:38.853384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 19:54:39 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.071 19:54:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.071 19:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.071 [2024-09-30 19:54:39.351559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:55.071 [2024-09-30 19:54:39.352832] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:39.352991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:39.353010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 [2024-09-30 19:54:39.353025] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:39.353035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:39.353042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 [2024-09-30 19:54:39.353051] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:39.353058] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:39.353066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 [2024-09-30 19:54:39.353074] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:55.071 [2024-09-30 19:54:39.353082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:55.071 [2024-09-30 19:54:39.353089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:55.071 19:54:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:55.071 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:55.637 19:54:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.637 19:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.637 19:54:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:55.637 19:54:39 sw_hotplug -- 
nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.637 19:54:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:55.895 19:54:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.101 19:54:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.101 19:54:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.101 19:54:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 
00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.101 19:54:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.101 19:54:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.101 19:54:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.101 [2024-09-30 19:54:52.251818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:08.101 [2024-09-30 19:54:52.253114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.101 [2024-09-30 19:54:52.253152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.101 [2024-09-30 19:54:52.253165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.101 [2024-09-30 19:54:52.253185] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.101 [2024-09-30 19:54:52.253193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.101 [2024-09-30 19:54:52.253202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.101 [2024-09-30 19:54:52.253209] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.101 [2024-09-30 19:54:52.253217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.101 [2024-09-30 19:54:52.253224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.101 [2024-09-30 19:54:52.253233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.101 [2024-09-30 19:54:52.253239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.101 [2024-09-30 19:54:52.253247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.101 19:54:52 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:08.101 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:08.668 19:54:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.668 19:54:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:08.668 19:54:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:08.668 19:54:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:08.668 [2024-09-30 19:54:52.851810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:08.668 [2024-09-30 19:54:52.853112] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.668 [2024-09-30 19:54:52.853144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.668 [2024-09-30 19:54:52.853157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.668 [2024-09-30 19:54:52.853170] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.668 [2024-09-30 19:54:52.853178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.668 [2024-09-30 19:54:52.853186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.668 [2024-09-30 19:54:52.853195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.668 [2024-09-30 19:54:52.853202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.668 [2024-09-30 19:54:52.853210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.668 [2024-09-30 19:54:52.853218] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:08.668 [2024-09-30 19:54:52.853225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:08.668 [2024-09-30 19:54:52.853232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.927 19:54:53 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:08.927 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:09.185 19:54:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.185 19:54:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.185 19:54:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.185 19:54:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.383 19:55:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.383 19:55:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.383 19:55:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.383 19:55:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.383 19:55:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.383 19:55:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:21.383 19:55:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 
00:11:21.383 [2024-09-30 19:55:05.652054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:21.383 [2024-09-30 19:55:05.653363] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.383 [2024-09-30 19:55:05.653407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.383 [2024-09-30 19:55:05.653420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.383 [2024-09-30 19:55:05.653442] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.383 [2024-09-30 19:55:05.653449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.383 [2024-09-30 19:55:05.653460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.383 [2024-09-30 19:55:05.653468] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.383 [2024-09-30 19:55:05.653477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.383 [2024-09-30 19:55:05.653483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.383 [2024-09-30 19:55:05.653492] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.383 [2024-09-30 19:55:05.653498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.383 [2024-09-30 19:55:05.653506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.950 [2024-09-30 19:55:06.152053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:21.950 [2024-09-30 19:55:06.153260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.950 [2024-09-30 19:55:06.153303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.950 [2024-09-30 19:55:06.153316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.950 [2024-09-30 19:55:06.153328] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.950 [2024-09-30 19:55:06.153336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.950 [2024-09-30 19:55:06.153343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.950 [2024-09-30 19:55:06.153352] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.950 [2024-09-30 19:55:06.153359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.950 [2024-09-30 19:55:06.153370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.950 [2024-09-30 19:55:06.153377] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.950 [2024-09-30 19:55:06.153385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.950 [2024-09-30 19:55:06.153391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.950 19:55:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.950 19:55:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.950 19:55:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.950 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.207 19:55:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:34.439 19:55:18 
sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.74 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.74 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:11:34.439 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:34.439 19:55:18 sw_hotplug -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:34.439 19:55:18 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:34.439 19:55:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.026 19:55:24 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.026 19:55:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.026 19:55:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 19:55:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:41.026 19:55:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:41.026 [2024-09-30 19:55:24.624259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:41.026 [2024-09-30 19:55:24.625253] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:24.625298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:24.625309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 [2024-09-30 19:55:24.625331] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:24.625339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:24.625347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 [2024-09-30 19:55:24.625356] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:24.625364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:24.625371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 [2024-09-30 19:55:24.625380] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:24.625387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:24.625397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.026 19:55:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.026 19:55:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 19:55:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:41.026 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:41.026 [2024-09-30 19:55:25.224265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:41.026 [2024-09-30 19:55:25.225189] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:25.225216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:25.225227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 [2024-09-30 19:55:25.225238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:25.225247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:25.225255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 [2024-09-30 19:55:25.225264] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:25.225281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:25.225290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.026 [2024-09-30 19:55:25.225297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.026 [2024-09-30 19:55:25.225307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.026 [2024-09-30 19:55:25.225313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.593 19:55:25 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.593 19:55:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.593 19:55:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.593 19:55:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:41.593 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:41.851 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:41.851 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:41.851 19:55:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:54.101 19:55:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:54.101 19:55:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:54.101 19:55:37 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:11:54.101 19:55:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.101 19:55:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.101 19:55:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.101 19:55:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.101 19:55:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.101 19:55:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.101 [2024-09-30 19:55:38.024507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:54.101 [2024-09-30 19:55:38.025620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.101 [2024-09-30 19:55:38.025654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.101 [2024-09-30 19:55:38.025666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.101 [2024-09-30 19:55:38.025684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.101 [2024-09-30 19:55:38.025691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.101 [2024-09-30 19:55:38.025700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.101 [2024-09-30 19:55:38.025707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.101 [2024-09-30 19:55:38.025716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.101 [2024-09-30 19:55:38.025723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.101 [2024-09-30 19:55:38.025732] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.101 [2024-09-30 19:55:38.025739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.101 [2024-09-30 19:55:38.025747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.101 19:55:38 sw_hotplug -- 
nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.101 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.102 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.102 19:55:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.102 19:55:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.102 19:55:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.102 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:54.102 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:54.360 [2024-09-30 19:55:38.524503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:54.360 [2024-09-30 19:55:38.525426] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.360 [2024-09-30 19:55:38.525453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.360 [2024-09-30 19:55:38.525465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.360 [2024-09-30 19:55:38.525476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.360 [2024-09-30 19:55:38.525486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.360 [2024-09-30 19:55:38.525494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.360 [2024-09-30 19:55:38.525503] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.360 [2024-09-30 19:55:38.525510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.360 [2024-09-30 19:55:38.525519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.360 [2024-09-30 19:55:38.525527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.360 [2024-09-30 19:55:38.525535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.360 [2024-09-30 19:55:38.525541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.360 19:55:38 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.360 19:55:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.360 19:55:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.360 19:55:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:54.360 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:54.619 19:55:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.817 19:55:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.817 19:55:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.817 19:55:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.817 19:55:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.817 19:55:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.817 19:55:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:06.817 19:55:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 
00:12:06.817 [2024-09-30 19:55:51.024733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:06.817 [2024-09-30 19:55:51.025705] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.817 [2024-09-30 19:55:51.025742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.817 [2024-09-30 19:55:51.025755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.817 [2024-09-30 19:55:51.025775] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.817 [2024-09-30 19:55:51.025782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.817 [2024-09-30 19:55:51.025791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.817 [2024-09-30 19:55:51.025801] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.817 [2024-09-30 19:55:51.025813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.817 [2024-09-30 19:55:51.025821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.817 [2024-09-30 19:55:51.025830] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.817 [2024-09-30 19:55:51.025836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.817 [2024-09-30 19:55:51.025845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.075 [2024-09-30 19:55:51.424728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:07.075 [2024-09-30 19:55:51.425662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.075 [2024-09-30 19:55:51.425690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.075 [2024-09-30 19:55:51.425701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.075 [2024-09-30 19:55:51.425711] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.075 [2024-09-30 19:55:51.425721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.075 [2024-09-30 19:55:51.425728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.075 [2024-09-30 19:55:51.425738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.075 [2024-09-30 19:55:51.425745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.075 [2024-09-30 19:55:51.425753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.075 [2024-09-30 19:55:51.425760] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.075 [2024-09-30 19:55:51.425771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.075 [2024-09-30 19:55:51.425778] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.334 19:55:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.334 19:55:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.334 19:55:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.334 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.592 19:55:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:19.788 19:56:03 
sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.32 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.32 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.32 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.32 2 00:12:19.788 remove_attach_helper took 45.32s to complete (handling 2 nvme drive(s)) 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:19.788 19:56:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67565 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67565 ']' 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67565 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67565 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.788 killing process with pid 67565 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67565' 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67565 00:12:19.788 19:56:03 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67565 00:12:21.177 19:56:05 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:21.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:21.697 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.697 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:21.957 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:21.957 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:21.957 00:12:21.958 real 2m30.723s 00:12:21.958 user 1m52.759s 00:12:21.958 sys 0m16.636s 00:12:21.958 ************************************ 00:12:21.958 END TEST sw_hotplug 00:12:21.958 ************************************ 00:12:21.958 19:56:06 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.958 19:56:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.958 19:56:06 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:21.958 19:56:06 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:21.958 19:56:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.958 19:56:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.958 19:56:06 -- common/autotest_common.sh@10 -- # set +x 00:12:21.958 
************************************ 00:12:21.958 START TEST nvme_xnvme 00:12:21.958 ************************************ 00:12:21.958 19:56:06 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:22.220 * Looking for test storage... 00:12:22.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:22.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.220 --rc genhtml_branch_coverage=1 00:12:22.220 --rc genhtml_function_coverage=1 00:12:22.220 --rc genhtml_legend=1 00:12:22.220 --rc geninfo_all_blocks=1 00:12:22.220 --rc geninfo_unexecuted_blocks=1 00:12:22.220 00:12:22.220 ' 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:22.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.220 --rc genhtml_branch_coverage=1 00:12:22.220 --rc genhtml_function_coverage=1 00:12:22.220 --rc genhtml_legend=1 00:12:22.220 --rc geninfo_all_blocks=1 00:12:22.220 --rc geninfo_unexecuted_blocks=1 00:12:22.220 00:12:22.220 ' 00:12:22.220 19:56:06 nvme_xnvme -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:22.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.220 --rc genhtml_branch_coverage=1 00:12:22.220 --rc genhtml_function_coverage=1 00:12:22.220 --rc genhtml_legend=1 00:12:22.220 --rc geninfo_all_blocks=1 00:12:22.220 --rc geninfo_unexecuted_blocks=1 00:12:22.220 00:12:22.220 ' 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:22.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.220 --rc genhtml_branch_coverage=1 00:12:22.220 --rc genhtml_function_coverage=1 00:12:22.220 --rc genhtml_legend=1 00:12:22.220 --rc geninfo_all_blocks=1 00:12:22.220 --rc geninfo_unexecuted_blocks=1 00:12:22.220 00:12:22.220 ' 00:12:22.220 19:56:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.220 19:56:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.220 19:56:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.220 19:56:06 nvme_xnvme -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.220 19:56:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.220 19:56:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:22.220 19:56:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.220 19:56:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:22.220 19:56:06 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:22.221 19:56:06 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.221 19:56:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.221 ************************************ 00:12:22.221 START TEST xnvme_to_malloc_dd_copy 00:12:22.221 ************************************ 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- 
common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:22.221 19:56:06 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:22.221 19:56:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:22.221 { 00:12:22.221 "subsystems": [ 00:12:22.221 { 00:12:22.221 "subsystem": "bdev", 00:12:22.221 "config": [ 00:12:22.221 { 00:12:22.221 "params": { 00:12:22.221 "block_size": 512, 00:12:22.221 "num_blocks": 2097152, 00:12:22.221 "name": "malloc0" 00:12:22.221 }, 00:12:22.221 "method": "bdev_malloc_create" 00:12:22.221 }, 00:12:22.221 { 00:12:22.221 "params": { 00:12:22.221 "io_mechanism": "libaio", 00:12:22.221 "filename": "/dev/nullb0", 00:12:22.221 "name": "null0" 00:12:22.221 }, 00:12:22.221 "method": "bdev_xnvme_create" 00:12:22.221 }, 00:12:22.221 { 00:12:22.221 "method": "bdev_wait_for_examine" 00:12:22.221 } 00:12:22.221 ] 00:12:22.221 } 00:12:22.221 ] 00:12:22.221 } 00:12:22.221 [2024-09-30 19:56:06.582780] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:22.221 [2024-09-30 19:56:06.582947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68957 ] 00:12:22.483 [2024-09-30 19:56:06.739361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.744 [2024-09-30 19:56:07.012359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.123  Copying: 221/1024 [MB] (221 MBps) Copying: 463/1024 [MB] (241 MBps) Copying: 761/1024 [MB] (298 MBps) Copying: 1024/1024 [MB] (average 264 MBps) 00:12:30.123 00:12:30.123 19:56:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:30.123 19:56:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:30.123 19:56:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:30.123 19:56:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:30.123 { 00:12:30.123 "subsystems": [ 00:12:30.123 { 00:12:30.123 "subsystem": "bdev", 00:12:30.123 "config": [ 00:12:30.123 { 00:12:30.123 "params": { 00:12:30.123 "block_size": 512, 00:12:30.123 "num_blocks": 2097152, 00:12:30.123 "name": "malloc0" 00:12:30.123 }, 00:12:30.123 "method": "bdev_malloc_create" 00:12:30.123 }, 00:12:30.123 { 00:12:30.123 "params": { 00:12:30.123 "io_mechanism": "libaio", 00:12:30.123 "filename": "/dev/nullb0", 00:12:30.123 "name": "null0" 00:12:30.123 }, 00:12:30.123 "method": "bdev_xnvme_create" 00:12:30.123 }, 00:12:30.123 { 00:12:30.123 "method": "bdev_wait_for_examine" 00:12:30.123 } 00:12:30.123 ] 00:12:30.123 } 00:12:30.123 ] 00:12:30.123 } 00:12:30.123 [2024-09-30 19:56:14.381360] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:30.123 [2024-09-30 19:56:14.381481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69051 ] 00:12:30.403 [2024-09-30 19:56:14.532111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.403 [2024-09-30 19:56:14.717260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.148  Copying: 300/1024 [MB] (300 MBps) Copying: 601/1024 [MB] (301 MBps) Copying: 902/1024 [MB] (301 MBps) Copying: 1024/1024 [MB] (average 300 MBps) 00:12:37.148 00:12:37.148 19:56:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:37.148 19:56:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:37.148 19:56:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:37.148 19:56:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:37.148 19:56:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:37.148 19:56:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.148 { 00:12:37.148 "subsystems": [ 00:12:37.148 { 00:12:37.148 "subsystem": "bdev", 00:12:37.148 "config": [ 00:12:37.148 { 00:12:37.148 "params": { 00:12:37.148 "block_size": 512, 00:12:37.148 "num_blocks": 2097152, 00:12:37.148 "name": "malloc0" 00:12:37.148 }, 00:12:37.148 "method": "bdev_malloc_create" 00:12:37.148 }, 00:12:37.148 { 00:12:37.148 "params": { 00:12:37.148 "io_mechanism": "io_uring", 00:12:37.148 "filename": "/dev/nullb0", 00:12:37.148 "name": "null0" 00:12:37.148 }, 00:12:37.148 "method": "bdev_xnvme_create" 00:12:37.148 }, 00:12:37.148 { 00:12:37.148 "method": 
"bdev_wait_for_examine" 00:12:37.148 } 00:12:37.148 ] 00:12:37.148 } 00:12:37.148 ] 00:12:37.148 } 00:12:37.148 [2024-09-30 19:56:21.196991] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:37.148 [2024-09-30 19:56:21.197113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69133 ] 00:12:37.148 [2024-09-30 19:56:21.347558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.407 [2024-09-30 19:56:21.516398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.963  Copying: 309/1024 [MB] (309 MBps) Copying: 618/1024 [MB] (309 MBps) Copying: 927/1024 [MB] (309 MBps) Copying: 1024/1024 [MB] (average 309 MBps) 00:12:43.963 00:12:43.963 19:56:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:43.963 19:56:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:43.963 19:56:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:43.963 19:56:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.963 { 00:12:43.963 "subsystems": [ 00:12:43.963 { 00:12:43.963 "subsystem": "bdev", 00:12:43.963 "config": [ 00:12:43.963 { 00:12:43.963 "params": { 00:12:43.963 "block_size": 512, 00:12:43.963 "num_blocks": 2097152, 00:12:43.963 "name": "malloc0" 00:12:43.963 }, 00:12:43.963 "method": "bdev_malloc_create" 00:12:43.963 }, 00:12:43.963 { 00:12:43.963 "params": { 00:12:43.963 "io_mechanism": "io_uring", 00:12:43.963 "filename": "/dev/nullb0", 00:12:43.963 "name": "null0" 00:12:43.963 }, 00:12:43.963 "method": "bdev_xnvme_create" 00:12:43.963 }, 00:12:43.963 { 00:12:43.963 "method": "bdev_wait_for_examine" 
00:12:43.963 } 00:12:43.963 ] 00:12:43.963 } 00:12:43.963 ] 00:12:43.963 } 00:12:43.963 [2024-09-30 19:56:27.824146] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:43.963 [2024-09-30 19:56:27.824283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69214 ] 00:12:43.963 [2024-09-30 19:56:27.972738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.963 [2024-09-30 19:56:28.144869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.514  Copying: 310/1024 [MB] (310 MBps) Copying: 620/1024 [MB] (310 MBps) Copying: 931/1024 [MB] (310 MBps) Copying: 1024/1024 [MB] (average 310 MBps) 00:12:50.514 00:12:50.514 19:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:50.514 19:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:50.514 00:12:50.514 real 0m27.919s 00:12:50.514 user 0m24.178s 00:12:50.514 sys 0m3.191s 00:12:50.514 19:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.514 19:56:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:50.514 ************************************ 00:12:50.514 END TEST xnvme_to_malloc_dd_copy 00:12:50.514 ************************************ 00:12:50.514 19:56:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:50.514 19:56:34 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:50.514 19:56:34 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.514 19:56:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.514 ************************************ 00:12:50.514 START TEST xnvme_bdevperf 00:12:50.514 
************************************ 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@74 -- # gen_conf 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:50.514 19:56:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.514 { 00:12:50.514 "subsystems": [ 00:12:50.514 { 00:12:50.514 "subsystem": "bdev", 00:12:50.514 "config": [ 00:12:50.514 { 00:12:50.514 "params": { 00:12:50.514 "io_mechanism": "libaio", 00:12:50.514 "filename": "/dev/nullb0", 00:12:50.514 "name": "null0" 00:12:50.514 }, 00:12:50.514 "method": "bdev_xnvme_create" 00:12:50.514 }, 00:12:50.514 { 00:12:50.514 "method": "bdev_wait_for_examine" 00:12:50.514 } 00:12:50.514 ] 00:12:50.514 } 00:12:50.514 ] 00:12:50.514 } 00:12:50.514 [2024-09-30 19:56:34.512744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:50.514 [2024-09-30 19:56:34.512856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69319 ] 00:12:50.514 [2024-09-30 19:56:34.662465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.514 [2024-09-30 19:56:34.866911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.776 Running I/O for 5 seconds... 
00:12:55.937 155968.00 IOPS, 609.25 MiB/s 176000.00 IOPS, 687.50 MiB/s 185045.33 IOPS, 722.83 MiB/s 189888.00 IOPS, 741.75 MiB/s 192729.60 IOPS, 752.85 MiB/s 00:12:55.937 Latency(us) 00:12:55.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.937 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:55.937 null0 : 5.00 192668.42 752.61 0.00 0.00 329.89 305.62 2886.10 00:12:55.937 =================================================================================================================== 00:12:55.937 Total : 192668.42 752.61 0.00 0.00 329.89 305.62 2886.10 00:12:56.506 19:56:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:56.506 19:56:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:56.506 19:56:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:56.506 19:56:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:56.506 19:56:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:56.506 19:56:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.506 { 00:12:56.506 "subsystems": [ 00:12:56.506 { 00:12:56.506 "subsystem": "bdev", 00:12:56.506 "config": [ 00:12:56.506 { 00:12:56.506 "params": { 00:12:56.506 "io_mechanism": "io_uring", 00:12:56.506 "filename": "/dev/nullb0", 00:12:56.506 "name": "null0" 00:12:56.506 }, 00:12:56.506 "method": "bdev_xnvme_create" 00:12:56.506 }, 00:12:56.506 { 00:12:56.506 "method": "bdev_wait_for_examine" 00:12:56.506 } 00:12:56.506 ] 00:12:56.506 } 00:12:56.506 ] 00:12:56.506 } 00:12:56.765 [2024-09-30 19:56:40.897368] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:56.766 [2024-09-30 19:56:40.897484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69394 ] 00:12:56.766 [2024-09-30 19:56:41.048464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.024 [2024-09-30 19:56:41.207833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.282 Running I/O for 5 seconds... 00:13:02.392 232128.00 IOPS, 906.75 MiB/s 232000.00 IOPS, 906.25 MiB/s 231957.33 IOPS, 906.08 MiB/s 231952.00 IOPS, 906.06 MiB/s 00:13:02.392 Latency(us) 00:13:02.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.392 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:02.392 null0 : 5.00 231879.37 905.78 0.00 0.00 273.96 151.24 1512.37 00:13:02.392 =================================================================================================================== 00:13:02.392 Total : 231879.37 905.78 0.00 0.00 273.96 151.24 1512.37 00:13:02.961 19:56:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:02.961 19:56:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:02.961 00:13:02.961 real 0m12.719s 00:13:02.961 user 0m10.296s 00:13:02.961 sys 0m2.176s 00:13:02.961 19:56:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.961 ************************************ 00:13:02.961 END TEST xnvme_bdevperf 00:13:02.961 ************************************ 00:13:02.961 19:56:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.961 00:13:02.961 real 0m40.892s 00:13:02.961 user 0m34.592s 00:13:02.961 sys 0m5.493s 00:13:02.961 19:56:47 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.961 19:56:47 nvme_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.961 ************************************ 00:13:02.961 END TEST nvme_xnvme 00:13:02.961 ************************************ 00:13:02.961 19:56:47 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:02.961 19:56:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:02.961 19:56:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.961 19:56:47 -- common/autotest_common.sh@10 -- # set +x 00:13:02.961 ************************************ 00:13:02.961 START TEST blockdev_xnvme 00:13:02.961 ************************************ 00:13:02.961 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:02.961 * Looking for test storage... 00:13:02.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:02.961 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@340 -- # 
ver1_l=2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.223 19:56:47 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.223 --rc genhtml_branch_coverage=1 00:13:03.223 --rc genhtml_function_coverage=1 00:13:03.223 --rc genhtml_legend=1 
00:13:03.223 --rc geninfo_all_blocks=1 00:13:03.223 --rc geninfo_unexecuted_blocks=1 00:13:03.223 00:13:03.223 ' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.223 --rc genhtml_branch_coverage=1 00:13:03.223 --rc genhtml_function_coverage=1 00:13:03.223 --rc genhtml_legend=1 00:13:03.223 --rc geninfo_all_blocks=1 00:13:03.223 --rc geninfo_unexecuted_blocks=1 00:13:03.223 00:13:03.223 ' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.223 --rc genhtml_branch_coverage=1 00:13:03.223 --rc genhtml_function_coverage=1 00:13:03.223 --rc genhtml_legend=1 00:13:03.223 --rc geninfo_all_blocks=1 00:13:03.223 --rc geninfo_unexecuted_blocks=1 00:13:03.223 00:13:03.223 ' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.223 --rc genhtml_branch_coverage=1 00:13:03.223 --rc genhtml_function_coverage=1 00:13:03.223 --rc genhtml_legend=1 00:13:03.223 --rc geninfo_all_blocks=1 00:13:03.223 --rc geninfo_unexecuted_blocks=1 00:13:03.223 00:13:03.223 ' 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 
00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69536 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69536 00:13:03.223 19:56:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69536 ']' 00:13:03.223 19:56:47 
blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.223 19:56:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.223 [2024-09-30 19:56:47.506072] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:03.223 [2024-09-30 19:56:47.506247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69536 ] 00:13:03.483 [2024-09-30 19:56:47.671922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.483 [2024-09-30 19:56:47.836246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.049 19:56:48 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.049 19:56:48 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:04.049 19:56:48 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:04.049 19:56:48 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:04.049 19:56:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:04.049 19:56:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:04.049 19:56:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:04.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, 
so not binding PCI dev 00:13:04.566 Waiting for block devices as requested 00:13:04.566 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.825 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.825 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:04.825 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:10.093 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme2n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:10.093 19:56:54 
blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:10.093 nvme0n1 00:13:10.093 nvme1n1 00:13:10.093 nvme2n1 00:13:10.093 nvme2n2 00:13:10.093 nvme2n3 00:13:10.093 nvme3n1 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- 
bdev/blockdev.sh@739 -- # cat 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:10.093 19:56:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:10.093 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:10.094 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c8c15684-1615-4c6c-85bb-4e5f65813d94"' ' ],' ' 
"product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c8c15684-1615-4c6c-85bb-4e5f65813d94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c5da27ec-d8ff-4cde-a130-0e3dacd6c69a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c5da27ec-d8ff-4cde-a130-0e3dacd6c69a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bb53eedd-15ea-4605-b32a-27832910e60f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb53eedd-15ea-4605-b32a-27832910e60f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f6e8adea-6986-426d-9f71-4454333b4694"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f6e8adea-6986-426d-9f71-4454333b4694",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "2315cde6-1279-4863-871b-c7d21f130047"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2315cde6-1279-4863-871b-c7d21f130047",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c3ebc461-496c-46b7-b60a-9edf2977ebd5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c3ebc461-496c-46b7-b60a-9edf2977ebd5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:10.094 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:10.094 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:10.094 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:10.094 19:56:54 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69536 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69536 ']' 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69536 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:10.094 19:56:54 
blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69536 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.094 killing process with pid 69536 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69536' 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69536 00:13:10.094 19:56:54 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69536 00:13:11.472 19:56:55 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.472 19:56:55 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:11.472 19:56:55 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:11.472 19:56:55 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.472 19:56:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.472 ************************************ 00:13:11.472 START TEST bdev_hello_world 00:13:11.472 ************************************ 00:13:11.472 19:56:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:11.472 [2024-09-30 19:56:55.826098] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:11.472 [2024-09-30 19:56:55.826212] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:13:11.733 [2024-09-30 19:56:55.975486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.993 [2024-09-30 19:56:56.213921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.564 [2024-09-30 19:56:56.630963] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:12.564 [2024-09-30 19:56:56.631040] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:12.564 [2024-09-30 19:56:56.631059] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:12.564 [2024-09-30 19:56:56.633430] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:12.564 [2024-09-30 19:56:56.633999] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:12.564 [2024-09-30 19:56:56.634036] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:12.564 [2024-09-30 19:56:56.634733] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:13:12.564 00:13:12.564 [2024-09-30 19:56:56.634772] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:13.133 00:13:13.133 real 0m1.590s 00:13:13.133 user 0m1.204s 00:13:13.133 sys 0m0.258s 00:13:13.133 19:56:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.133 ************************************ 00:13:13.133 END TEST bdev_hello_world 00:13:13.133 ************************************ 00:13:13.133 19:56:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:13.133 19:56:57 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:13.133 19:56:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.133 19:56:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.133 19:56:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.133 ************************************ 00:13:13.133 START TEST bdev_bounds 00:13:13.133 ************************************ 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:13.133 Process bdevio pid: 69932 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69932 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69932' 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69932 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69932 ']' 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.133 19:56:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:13.133 [2024-09-30 19:56:57.487503] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:13.133 [2024-09-30 19:56:57.487635] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69932 ] 00:13:13.393 [2024-09-30 19:56:57.637020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.652 [2024-09-30 19:56:57.807189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.652 [2024-09-30 19:56:57.807439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.652 [2024-09-30 19:56:57.807449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.219 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.219 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:14.219 19:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:14.219 I/O targets: 00:13:14.219 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:14.219 nvme1n1: 1548666 blocks of 4096 bytes (6050 
MiB) 00:13:14.219 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:14.219 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:14.219 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:14.219 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:14.219 00:13:14.219 00:13:14.219 CUnit - A unit testing framework for C - Version 2.1-3 00:13:14.219 http://cunit.sourceforge.net/ 00:13:14.219 00:13:14.219 00:13:14.219 Suite: bdevio tests on: nvme3n1 00:13:14.219 Test: blockdev write read block ...passed 00:13:14.219 Test: blockdev write zeroes read block ...passed 00:13:14.219 Test: blockdev write zeroes read no split ...passed 00:13:14.219 Test: blockdev write zeroes read split ...passed 00:13:14.219 Test: blockdev write zeroes read split partial ...passed 00:13:14.219 Test: blockdev reset ...passed 00:13:14.219 Test: blockdev write read 8 blocks ...passed 00:13:14.219 Test: blockdev write read size > 128k ...passed 00:13:14.219 Test: blockdev write read invalid size ...passed 00:13:14.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.219 Test: blockdev write read max offset ...passed 00:13:14.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.219 Test: blockdev writev readv 8 blocks ...passed 00:13:14.219 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.219 Test: blockdev writev readv block ...passed 00:13:14.219 Test: blockdev writev readv size > 128k ...passed 00:13:14.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.219 Test: blockdev comparev and writev ...passed 00:13:14.219 Test: blockdev nvme passthru rw ...passed 00:13:14.219 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.219 Test: blockdev nvme admin passthru ...passed 00:13:14.219 Test: blockdev copy ...passed 00:13:14.219 Suite: bdevio tests on: nvme2n3 00:13:14.219 Test: blockdev write read 
block ...passed 00:13:14.219 Test: blockdev write zeroes read block ...passed 00:13:14.219 Test: blockdev write zeroes read no split ...passed 00:13:14.219 Test: blockdev write zeroes read split ...passed 00:13:14.219 Test: blockdev write zeroes read split partial ...passed 00:13:14.219 Test: blockdev reset ...passed 00:13:14.219 Test: blockdev write read 8 blocks ...passed 00:13:14.219 Test: blockdev write read size > 128k ...passed 00:13:14.219 Test: blockdev write read invalid size ...passed 00:13:14.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.219 Test: blockdev write read max offset ...passed 00:13:14.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.219 Test: blockdev writev readv 8 blocks ...passed 00:13:14.219 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.219 Test: blockdev writev readv block ...passed 00:13:14.219 Test: blockdev writev readv size > 128k ...passed 00:13:14.219 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.219 Test: blockdev comparev and writev ...passed 00:13:14.219 Test: blockdev nvme passthru rw ...passed 00:13:14.219 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.219 Test: blockdev nvme admin passthru ...passed 00:13:14.219 Test: blockdev copy ...passed 00:13:14.219 Suite: bdevio tests on: nvme2n2 00:13:14.219 Test: blockdev write read block ...passed 00:13:14.219 Test: blockdev write zeroes read block ...passed 00:13:14.219 Test: blockdev write zeroes read no split ...passed 00:13:14.478 Test: blockdev write zeroes read split ...passed 00:13:14.478 Test: blockdev write zeroes read split partial ...passed 00:13:14.478 Test: blockdev reset ...passed 00:13:14.478 Test: blockdev write read 8 blocks ...passed 00:13:14.478 Test: blockdev write read size > 128k ...passed 00:13:14.478 Test: blockdev write read invalid size ...passed 
00:13:14.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.478 Test: blockdev write read max offset ...passed 00:13:14.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.478 Test: blockdev writev readv 8 blocks ...passed 00:13:14.478 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.478 Test: blockdev writev readv block ...passed 00:13:14.478 Test: blockdev writev readv size > 128k ...passed 00:13:14.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.478 Test: blockdev comparev and writev ...passed 00:13:14.478 Test: blockdev nvme passthru rw ...passed 00:13:14.478 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.478 Test: blockdev nvme admin passthru ...passed 00:13:14.478 Test: blockdev copy ...passed 00:13:14.478 Suite: bdevio tests on: nvme2n1 00:13:14.478 Test: blockdev write read block ...passed 00:13:14.478 Test: blockdev write zeroes read block ...passed 00:13:14.478 Test: blockdev write zeroes read no split ...passed 00:13:14.478 Test: blockdev write zeroes read split ...passed 00:13:14.478 Test: blockdev write zeroes read split partial ...passed 00:13:14.478 Test: blockdev reset ...passed 00:13:14.478 Test: blockdev write read 8 blocks ...passed 00:13:14.478 Test: blockdev write read size > 128k ...passed 00:13:14.478 Test: blockdev write read invalid size ...passed 00:13:14.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.479 Test: blockdev write read max offset ...passed 00:13:14.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.479 Test: blockdev writev readv 8 blocks ...passed 00:13:14.479 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.479 Test: blockdev writev readv block ...passed 
00:13:14.479 Test: blockdev writev readv size > 128k ...passed 00:13:14.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.479 Test: blockdev comparev and writev ...passed 00:13:14.479 Test: blockdev nvme passthru rw ...passed 00:13:14.479 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.479 Test: blockdev nvme admin passthru ...passed 00:13:14.479 Test: blockdev copy ...passed 00:13:14.479 Suite: bdevio tests on: nvme1n1 00:13:14.479 Test: blockdev write read block ...passed 00:13:14.479 Test: blockdev write zeroes read block ...passed 00:13:14.479 Test: blockdev write zeroes read no split ...passed 00:13:14.479 Test: blockdev write zeroes read split ...passed 00:13:14.479 Test: blockdev write zeroes read split partial ...passed 00:13:14.479 Test: blockdev reset ...passed 00:13:14.479 Test: blockdev write read 8 blocks ...passed 00:13:14.479 Test: blockdev write read size > 128k ...passed 00:13:14.479 Test: blockdev write read invalid size ...passed 00:13:14.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.479 Test: blockdev write read max offset ...passed 00:13:14.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.479 Test: blockdev writev readv 8 blocks ...passed 00:13:14.479 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.479 Test: blockdev writev readv block ...passed 00:13:14.479 Test: blockdev writev readv size > 128k ...passed 00:13:14.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.479 Test: blockdev comparev and writev ...passed 00:13:14.479 Test: blockdev nvme passthru rw ...passed 00:13:14.479 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.479 Test: blockdev nvme admin passthru ...passed 00:13:14.479 Test: blockdev copy ...passed 00:13:14.479 Suite: bdevio tests on: nvme0n1 00:13:14.479 Test: blockdev write 
read block ...passed 00:13:14.479 Test: blockdev write zeroes read block ...passed 00:13:14.479 Test: blockdev write zeroes read no split ...passed 00:13:14.479 Test: blockdev write zeroes read split ...passed 00:13:14.479 Test: blockdev write zeroes read split partial ...passed 00:13:14.479 Test: blockdev reset ...passed 00:13:14.479 Test: blockdev write read 8 blocks ...passed 00:13:14.479 Test: blockdev write read size > 128k ...passed 00:13:14.479 Test: blockdev write read invalid size ...passed 00:13:14.738 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:14.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:14.738 Test: blockdev write read max offset ...passed 00:13:14.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:14.738 Test: blockdev writev readv 8 blocks ...passed 00:13:14.738 Test: blockdev writev readv 30 x 1block ...passed 00:13:14.738 Test: blockdev writev readv block ...passed 00:13:14.738 Test: blockdev writev readv size > 128k ...passed 00:13:14.738 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:14.738 Test: blockdev comparev and writev ...passed 00:13:14.738 Test: blockdev nvme passthru rw ...passed 00:13:14.738 Test: blockdev nvme passthru vendor specific ...passed 00:13:14.738 Test: blockdev nvme admin passthru ...passed 00:13:14.738 Test: blockdev copy ...passed 00:13:14.739 00:13:14.739 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.739 suites 6 6 n/a 0 0 00:13:14.739 tests 138 138 138 0 0 00:13:14.739 asserts 780 780 780 0 n/a 00:13:14.739 00:13:14.739 Elapsed time = 1.128 seconds 00:13:14.739 0 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69932 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69932 ']' 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69932 00:13:14.739 19:56:58 
blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69932 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.739 killing process with pid 69932 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69932' 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69932 00:13:14.739 19:56:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69932 00:13:15.307 19:56:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:15.307 00:13:15.307 real 0m2.158s 00:13:15.307 user 0m5.079s 00:13:15.307 sys 0m0.292s 00:13:15.307 19:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.307 19:56:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:15.307 ************************************ 00:13:15.307 END TEST bdev_bounds 00:13:15.307 ************************************ 00:13:15.307 19:56:59 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:15.307 19:56:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:15.307 19:56:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.307 19:56:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.307 ************************************ 00:13:15.307 START TEST bdev_nbd 00:13:15.307 
************************************ 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:15.307 19:56:59 
blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69986 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69986 /var/tmp/spdk-nbd.sock 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69986 ']' 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.307 19:56:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:15.566 [2024-09-30 19:56:59.717522] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:15.567 [2024-09-30 19:56:59.717643] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.567 [2024-09-30 19:56:59.866303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.828 [2024-09-30 19:57:00.032573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.401 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.401 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:16.401 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:16.401 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.402 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.662 1+0 records in 00:13:16.662 1+0 records out 00:13:16.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113008 s, 3.6 MB/s 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:16.662 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.663 19:57:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:16.663 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:16.663 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.923 1+0 records in 00:13:16.923 1+0 records out 00:13:16.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460467 s, 8.9 MB/s 00:13:16.923 19:57:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:16.923 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:16.924 19:57:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.924 1+0 records in 00:13:16.924 1+0 records out 00:13:16.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927335 s, 4.4 MB/s 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:16.924 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 
00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.184 1+0 records in 00:13:17.184 1+0 records out 00:13:17.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111735 s, 3.7 MB/s 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.184 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.446 
19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.446 1+0 records in 00:13:17.446 1+0 records out 00:13:17.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000988103 s, 4.1 MB/s 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.446 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:17.708 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:17.708 19:57:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 
00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.708 1+0 records in 00:13:17.708 1+0 records out 00:13:17.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549219 s, 7.5 MB/s 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:17.708 19:57:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:17.969 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:17.969 { 00:13:17.969 "nbd_device": "/dev/nbd0", 00:13:17.969 "bdev_name": "nvme0n1" 00:13:17.969 }, 00:13:17.969 { 00:13:17.969 "nbd_device": "/dev/nbd1", 00:13:17.969 "bdev_name": "nvme1n1" 00:13:17.969 }, 00:13:17.969 { 00:13:17.969 "nbd_device": "/dev/nbd2", 00:13:17.969 "bdev_name": "nvme2n1" 00:13:17.969 }, 00:13:17.969 { 00:13:17.969 "nbd_device": "/dev/nbd3", 00:13:17.969 "bdev_name": "nvme2n2" 00:13:17.969 }, 00:13:17.969 { 00:13:17.969 "nbd_device": "/dev/nbd4", 00:13:17.970 "bdev_name": "nvme2n3" 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd5", 00:13:17.970 "bdev_name": "nvme3n1" 00:13:17.970 } 00:13:17.970 ]' 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd0", 00:13:17.970 "bdev_name": "nvme0n1" 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd1", 00:13:17.970 "bdev_name": "nvme1n1" 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd2", 00:13:17.970 "bdev_name": "nvme2n1" 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd3", 00:13:17.970 "bdev_name": "nvme2n2" 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd4", 00:13:17.970 "bdev_name": "nvme2n3" 00:13:17.970 }, 00:13:17.970 { 00:13:17.970 "nbd_device": "/dev/nbd5", 00:13:17.970 "bdev_name": "nvme3n1" 00:13:17.970 } 00:13:17.970 ]' 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 
/dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.970 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.232 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.493 19:57:02 
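The `nbd_get_disks` RPC above returns a JSON array of `{nbd_device, bdev_name}` pairs, and `nbd_common.sh` turns it into a bash array with `jq -r '.[] | .nbd_device'`. A dependency-free sketch of the same extraction over a sample of that JSON, using `sed` in place of `jq` (the harness itself uses `jq`):

```shell
# Sample of the nbd_get_disks output shape seen in the trace.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "nvme0n1" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "nvme1n1" }
]'
# Pull out each nbd_device value, one per line, then word-split into
# an array the way nbd_common.sh does with its jq output.
nbd_disks_name=($(printf '%s\n' "$nbd_disks_json" |
    sed -n 's/.*"nbd_device": "\([^"]*\)".*/\1/p'))
echo "${#nbd_disks_name[@]} devices: ${nbd_disks_name[*]}"
# prints "2 devices: /dev/nbd0 /dev/nbd1"
```

Word-splitting the extracted names into an array is safe here because `/dev/nbdX` paths never contain whitespace.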
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.493 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.494 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.755 19:57:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd3 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.755 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:19.017 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:19.017 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:19.017 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:19.017 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.017 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.018 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:19.018 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.018 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.018 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.018 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.277 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:19.536 19:57:03 
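After all six devices are stopped, `nbd_get_disks` returns `[]` and `nbd_get_count` counts `/dev/nbd` matches in the extracted names. `grep -c` exits nonzero when the count is zero, which is why the trace shows a `true` fallback at `nbd_common.sh@65` before settling on `count=0`. A sketch of that counting idiom:

```shell
# Count how many lines of stdin mention /dev/nbd; grep -c still prints
# "0" on no match but exits 1, so the || true keeps the pipeline green.
count_nbd() {
    grep -c /dev/nbd || true
}

empty_count=$(printf '%s' '' | count_nbd)                       # "0"
full_count=$(printf '%s\n' /dev/nbd0 /dev/nbd1 /dev/nbd10 | count_nbd)  # "3"
echo "$empty_count $full_count"   # prints "0 3"
```

Without the fallback, a zero count would abort the harness under `set -e` even though "no devices left" is exactly the state being asserted.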
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( 
i < 6 )) 00:13:19.536 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:19.794 /dev/nbd0 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.794 1+0 records in 00:13:19.794 1+0 records out 00:13:19.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474729 s, 8.6 MB/s 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.794 19:57:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.794 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:19.795 19:57:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:20.053 /dev/nbd1 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.053 1+0 records in 00:13:20.053 1+0 records out 00:13:20.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508254 s, 8.1 MB/s 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.053 19:57:04 
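Once a device name shows up in `/proc/partitions`, `waitfornbd` also confirms it is actually readable: `dd` one 4 KiB block out with `iflag=direct`, `stat` the result, and require a nonzero size before returning. A sketch of that read-back check with a regular file standing in for `/dev/nbdX` (`iflag=direct` dropped, since O_DIRECT needs device or filesystem support the sketch cannot assume):

```shell
# Read one 4 KiB block from the "device" and verify something came out.
dev=$(mktemp) out=$(mktemp)
dd if=/dev/zero of="$dev" bs=4096 count=2 status=none   # fake device
dd if="$dev" of="$out" bs=4096 count=1 status=none       # the read-back
size=$(stat -c %s "$out")
rm -f "$dev" "$out"
[ "$size" != 0 ] && echo readable   # prints "readable"
```

This is why each `1+0 records in / 1+0 records out` pair in the trace is followed by a `stat -c %s` and a `'[' 4096 '!=' 0 ']'` test.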
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:20.053 /dev/nbd10 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.053 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.312 1+0 records in 00:13:20.312 1+0 records out 00:13:20.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388783 s, 10.5 MB/s 00:13:20.312 
19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:20.312 /dev/nbd11 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- 
# dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.312 1+0 records in 00:13:20.312 1+0 records out 00:13:20.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482742 s, 8.5 MB/s 00:13:20.312 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:20.571 /dev/nbd12 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.571 19:57:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.571 1+0 records in 00:13:20.571 1+0 records out 00:13:20.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347386 s, 11.8 MB/s 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.571 19:57:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:20.832 /dev/nbd13 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # 
(( i <= 20 )) 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.832 1+0 records in 00:13:20.832 1+0 records out 00:13:20.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394051 s, 10.4 MB/s 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.832 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:21.093 { 00:13:21.093 "nbd_device": 
"/dev/nbd0", 00:13:21.093 "bdev_name": "nvme0n1" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd1", 00:13:21.093 "bdev_name": "nvme1n1" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd10", 00:13:21.093 "bdev_name": "nvme2n1" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd11", 00:13:21.093 "bdev_name": "nvme2n2" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd12", 00:13:21.093 "bdev_name": "nvme2n3" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd13", 00:13:21.093 "bdev_name": "nvme3n1" 00:13:21.093 } 00:13:21.093 ]' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd0", 00:13:21.093 "bdev_name": "nvme0n1" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd1", 00:13:21.093 "bdev_name": "nvme1n1" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd10", 00:13:21.093 "bdev_name": "nvme2n1" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd11", 00:13:21.093 "bdev_name": "nvme2n2" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd12", 00:13:21.093 "bdev_name": "nvme2n3" 00:13:21.093 }, 00:13:21.093 { 00:13:21.093 "nbd_device": "/dev/nbd13", 00:13:21.093 "bdev_name": "nvme3n1" 00:13:21.093 } 00:13:21.093 ]' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:21.093 /dev/nbd1 00:13:21.093 /dev/nbd10 00:13:21.093 /dev/nbd11 00:13:21.093 /dev/nbd12 00:13:21.093 /dev/nbd13' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:21.093 /dev/nbd1 00:13:21.093 /dev/nbd10 00:13:21.093 /dev/nbd11 00:13:21.093 /dev/nbd12 00:13:21.093 /dev/nbd13' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:21.093 256+0 records in 00:13:21.093 256+0 records out 00:13:21.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472373 s, 222 MB/s 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.093 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:21.355 256+0 records in 00:13:21.355 256+0 records out 00:13:21.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063246 s, 16.6 MB/s 00:13:21.355 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.355 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # 
dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:21.355 256+0 records in 00:13:21.355 256+0 records out 00:13:21.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0932475 s, 11.2 MB/s 00:13:21.355 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.355 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:21.617 256+0 records in 00:13:21.617 256+0 records out 00:13:21.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198459 s, 5.3 MB/s 00:13:21.617 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.617 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:21.617 256+0 records in 00:13:21.617 256+0 records out 00:13:21.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.206606 s, 5.1 MB/s 00:13:21.617 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.617 19:57:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:21.878 256+0 records in 00:13:21.878 256+0 records out 00:13:21.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204286 s, 5.1 MB/s 00:13:21.878 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:21.879 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:22.141 256+0 records in 00:13:22.141 256+0 records out 00:13:22.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189168 s, 5.5 MB/s 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.141 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 
-- # break 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.403 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.662 19:57:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:22.920 19:57:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.920 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 
/proc/partitions 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.178 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.436 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:23.718 19:57:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:23.718 19:57:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:24.017 malloc_lvol_verify 00:13:24.017 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:24.017 82abfbf5-60e2-45be-b2ae-851d04405440 00:13:24.275 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:24.276 98156fab-54e4-4de2-8230-ac59da538ecf 00:13:24.276 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:24.534 /dev/nbd0 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # 
wait_for_nbd_set_capacity /dev/nbd0 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:24.534 mke2fs 1.47.0 (5-Feb-2023) 00:13:24.534 Discarding device blocks: 0/4096 done 00:13:24.534 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:24.534 00:13:24.534 Allocating group tables: 0/1 done 00:13:24.534 Writing inode tables: 0/1 done 00:13:24.534 Creating journal (1024 blocks): done 00:13:24.534 Writing superblocks and filesystem accounting information: 0/1 done 00:13:24.534 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.534 19:57:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:24.792 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.792 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.792 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.792 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69986 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69986 ']' 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69986 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69986 00:13:24.793 killing process with pid 69986 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69986' 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69986 00:13:24.793 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69986 00:13:25.730 19:57:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:25.730 00:13:25.730 real 0m10.150s 00:13:25.730 user 0m13.946s 00:13:25.730 sys 0m3.478s 00:13:25.730 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.730 19:57:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:25.730 ************************************ 00:13:25.730 END TEST 
bdev_nbd 00:13:25.730 ************************************ 00:13:25.730 19:57:09 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:25.730 19:57:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:25.730 19:57:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:25.730 19:57:09 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:25.730 19:57:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:25.730 19:57:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.730 19:57:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.730 ************************************ 00:13:25.730 START TEST bdev_fio 00:13:25.730 ************************************ 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:25.730 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # 
local workload=verify 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:25.730 19:57:09 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:25.730 ************************************ 00:13:25.730 START TEST bdev_fio_rw_verify 00:13:25.730 ************************************ 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:25.730 19:57:09 
blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:25.730 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:25.731 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:25.731 19:57:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:25.990 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.990 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.990 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.991 job_nvme2n2: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.991 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.991 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:25.991 fio-3.35 00:13:25.991 Starting 6 threads 00:13:38.253 00:13:38.253 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70387: Mon Sep 30 19:57:20 2024 00:13:38.253 read: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(472MiB/10002msec) 00:13:38.253 slat (usec): min=2, max=2238, avg= 7.30, stdev=16.09 00:13:38.253 clat (usec): min=101, max=9874, avg=1673.31, stdev=850.01 00:13:38.253 lat (usec): min=105, max=9904, avg=1680.61, stdev=850.59 00:13:38.253 clat percentiles (usec): 00:13:38.253 | 50.000th=[ 1565], 99.000th=[ 4359], 99.900th=[ 5932], 99.990th=[ 7832], 00:13:38.253 | 99.999th=[ 9896] 00:13:38.253 write: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(492MiB/10002msec); 0 zone resets 00:13:38.253 slat (usec): min=12, max=4151, avg=42.99, stdev=153.82 00:13:38.253 clat (usec): min=113, max=8625, avg=1859.69, stdev=887.27 00:13:38.253 lat (usec): min=134, max=8673, avg=1902.68, stdev=900.89 00:13:38.253 clat percentiles (usec): 00:13:38.253 | 50.000th=[ 1745], 99.000th=[ 4555], 99.900th=[ 5997], 99.990th=[ 7701], 00:13:38.253 | 99.999th=[ 8586] 00:13:38.253 bw ( KiB/s): min=42738, max=73619, per=100.00%, avg=50665.42, stdev=1119.45, samples=114 00:13:38.253 iops : min=10684, max=18404, avg=12665.32, stdev=279.87, samples=114 00:13:38.253 lat (usec) : 250=0.64%, 500=3.20%, 750=5.58%, 1000=8.37% 00:13:38.253 lat (msec) : 2=49.04%, 4=31.14%, 10=2.03% 00:13:38.253 cpu : usr=47.08%, sys=30.98%, ctx=4734, majf=0, minf=13209 00:13:38.253 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:38.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.253 complete : 
0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.253 issued rwts: total=120934,126039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.253 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:38.253 00:13:38.253 Run status group 0 (all jobs): 00:13:38.253 READ: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=472MiB (495MB), run=10002-10002msec 00:13:38.253 WRITE: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=492MiB (516MB), run=10002-10002msec 00:13:38.253 ----------------------------------------------------- 00:13:38.253 Suppressions used: 00:13:38.253 count bytes template 00:13:38.253 6 48 /usr/src/fio/parse.c 00:13:38.253 5028 482688 /usr/src/fio/iolog.c 00:13:38.253 1 8 libtcmalloc_minimal.so 00:13:38.253 1 904 libcrypto.so 00:13:38.253 ----------------------------------------------------- 00:13:38.253 00:13:38.253 00:13:38.253 real 0m12.095s 00:13:38.253 user 0m29.819s 00:13:38.253 sys 0m18.984s 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.253 ************************************ 00:13:38.253 END TEST bdev_fio_rw_verify 00:13:38.253 ************************************ 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:38.253 19:57:22 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:38.253 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c8c15684-1615-4c6c-85bb-4e5f65813d94"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c8c15684-1615-4c6c-85bb-4e5f65813d94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c5da27ec-d8ff-4cde-a130-0e3dacd6c69a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c5da27ec-d8ff-4cde-a130-0e3dacd6c69a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bb53eedd-15ea-4605-b32a-27832910e60f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb53eedd-15ea-4605-b32a-27832910e60f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' 
' "name": "nvme2n2",' ' "aliases": [' ' "f6e8adea-6986-426d-9f71-4454333b4694"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f6e8adea-6986-426d-9f71-4454333b4694",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "2315cde6-1279-4863-871b-c7d21f130047"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2315cde6-1279-4863-871b-c7d21f130047",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c3ebc461-496c-46b7-b60a-9edf2977ebd5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"c3ebc461-496c-46b7-b60a-9edf2977ebd5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.254 /home/vagrant/spdk_repo/spdk 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:38.254 00:13:38.254 real 0m12.269s 00:13:38.254 user 0m29.893s 00:13:38.254 sys 0m19.063s 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.254 ************************************ 00:13:38.254 END TEST bdev_fio 00:13:38.254 ************************************ 00:13:38.254 19:57:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:38.254 19:57:22 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:38.254 19:57:22 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:38.254 19:57:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:38.254 19:57:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.254 19:57:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.254 ************************************ 00:13:38.254 START TEST bdev_verify 00:13:38.254 ************************************ 00:13:38.254 19:57:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:38.254 [2024-09-30 19:57:22.285229] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:38.254 [2024-09-30 19:57:22.285455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70561 ] 00:13:38.254 [2024-09-30 19:57:22.443416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.516 [2024-09-30 19:57:22.719693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.516 [2024-09-30 19:57:22.719822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.090 Running I/O for 5 seconds... 
00:13:43.953 24064.00 IOPS, 94.00 MiB/s 23856.00 IOPS, 93.19 MiB/s 23594.67 IOPS, 92.17 MiB/s 23912.00 IOPS, 93.41 MiB/s 24281.60 IOPS, 94.85 MiB/s 00:13:43.953 Latency(us) 00:13:43.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.953 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x0 length 0xa0000 00:13:43.953 nvme0n1 : 5.06 2101.65 8.21 0.00 0.00 60795.95 9427.10 69367.34 00:13:43.953 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0xa0000 length 0xa0000 00:13:43.953 nvme0n1 : 5.04 1702.04 6.65 0.00 0.00 75031.56 10082.46 74610.22 00:13:43.953 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x0 length 0xbd0bd 00:13:43.953 nvme1n1 : 5.06 2623.41 10.25 0.00 0.00 48573.94 7007.31 60898.07 00:13:43.953 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:43.953 nvme1n1 : 5.06 2255.55 8.81 0.00 0.00 56491.80 7259.37 66544.25 00:13:43.953 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x0 length 0x80000 00:13:43.953 nvme2n1 : 5.08 2142.48 8.37 0.00 0.00 59219.96 11947.72 54848.59 00:13:43.953 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x80000 length 0x80000 00:13:43.953 nvme2n1 : 5.04 1726.83 6.75 0.00 0.00 73698.47 7763.50 75013.51 00:13:43.953 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x0 length 0x80000 00:13:43.953 nvme2n2 : 5.08 2116.66 8.27 0.00 0.00 59841.42 12250.19 60898.07 00:13:43.953 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x80000 length 
0x80000 00:13:43.953 nvme2n2 : 5.07 1715.12 6.70 0.00 0.00 74032.53 11594.83 72190.42 00:13:43.953 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x0 length 0x80000 00:13:43.953 nvme2n3 : 5.07 2097.18 8.19 0.00 0.00 60298.38 10939.47 60898.07 00:13:43.953 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x80000 length 0x80000 00:13:43.953 nvme2n3 : 5.08 1714.58 6.70 0.00 0.00 73852.85 8519.68 68964.04 00:13:43.953 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x0 length 0x20000 00:13:43.953 nvme3n1 : 5.09 2113.72 8.26 0.00 0.00 59732.41 4461.49 60898.07 00:13:43.953 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.953 Verification LBA range: start 0x20000 length 0x20000 00:13:43.953 nvme3n1 : 5.08 1737.31 6.79 0.00 0.00 72691.94 3100.36 72997.02 00:13:43.953 =================================================================================================================== 00:13:43.953 Total : 24046.55 93.93 0.00 0.00 63362.11 3100.36 75013.51 00:13:45.341 00:13:45.341 real 0m7.150s 00:13:45.341 user 0m11.403s 00:13:45.341 sys 0m1.456s 00:13:45.341 19:57:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.341 19:57:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:45.341 ************************************ 00:13:45.341 END TEST bdev_verify 00:13:45.341 ************************************ 00:13:45.341 19:57:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:45.341 19:57:29 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:45.341 19:57:29 
blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.341 19:57:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.341 ************************************ 00:13:45.341 START TEST bdev_verify_big_io 00:13:45.341 ************************************ 00:13:45.341 19:57:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:45.341 [2024-09-30 19:57:29.503150] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:45.341 [2024-09-30 19:57:29.503330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70661 ] 00:13:45.341 [2024-09-30 19:57:29.659637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:45.603 [2024-09-30 19:57:29.936837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.603 [2024-09-30 19:57:29.936961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.177 Running I/O for 5 seconds... 
00:13:52.676 1944.00 IOPS, 121.50 MiB/s 3436.50 IOPS, 214.78 MiB/s 00:13:52.676 Latency(us) 00:13:52.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.676 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x0 length 0xa000 00:13:52.676 nvme0n1 : 5.85 109.47 6.84 0.00 0.00 1121961.39 11494.01 1084066.26 00:13:52.676 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0xa000 length 0xa000 00:13:52.676 nvme0n1 : 5.97 112.55 7.03 0.00 0.00 1088742.01 86709.17 1529307.77 00:13:52.676 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x0 length 0xbd0b 00:13:52.676 nvme1n1 : 5.84 185.43 11.59 0.00 0.00 654461.91 12098.95 729163.62 00:13:52.676 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:52.676 nvme1n1 : 5.89 86.96 5.43 0.00 0.00 1341175.09 11292.36 1380893.93 00:13:52.676 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x0 length 0x8000 00:13:52.676 nvme2n1 : 5.85 150.44 9.40 0.00 0.00 797560.17 24097.08 851766.35 00:13:52.676 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x8000 length 0x8000 00:13:52.676 nvme2n1 : 6.03 90.24 5.64 0.00 0.00 1221321.19 137121.48 1064707.94 00:13:52.676 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x0 length 0x8000 00:13:52.676 nvme2n2 : 5.86 87.42 5.46 0.00 0.00 1321879.24 93161.94 2968276.68 00:13:52.676 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x8000 length 0x8000 00:13:52.676 nvme2n2 : 6.11 125.66 7.85 0.00 0.00 
841561.27 79449.80 1000180.18 00:13:52.676 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x0 length 0x8000 00:13:52.676 nvme2n3 : 5.85 156.35 9.77 0.00 0.00 726851.59 19459.15 1374441.16 00:13:52.676 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x8000 length 0x8000 00:13:52.676 nvme2n3 : 6.24 105.11 6.57 0.00 0.00 961894.03 51218.90 3768420.82 00:13:52.676 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x0 length 0x2000 00:13:52.676 nvme3n1 : 5.85 174.90 10.93 0.00 0.00 630976.90 6175.51 645277.54 00:13:52.676 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:52.676 Verification LBA range: start 0x2000 length 0x2000 00:13:52.676 nvme3n1 : 6.46 226.45 14.15 0.00 0.00 430500.38 3780.92 3329632.10 00:13:52.676 =================================================================================================================== 00:13:52.676 Total : 1610.98 100.69 0.00 0.00 838447.49 3780.92 3768420.82 00:13:54.067 00:13:54.067 real 0m8.776s 00:13:54.067 user 0m15.696s 00:13:54.067 sys 0m0.594s 00:13:54.067 19:57:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.067 19:57:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 ************************************ 00:13:54.067 END TEST bdev_verify_big_io 00:13:54.067 ************************************ 00:13:54.067 19:57:38 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:54.067 19:57:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:54.067 19:57:38 blockdev_xnvme -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.067 19:57:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 ************************************ 00:13:54.067 START TEST bdev_write_zeroes 00:13:54.067 ************************************ 00:13:54.067 19:57:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:54.067 [2024-09-30 19:57:38.357464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:54.067 [2024-09-30 19:57:38.357630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70783 ] 00:13:54.329 [2024-09-30 19:57:38.515535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.591 [2024-09-30 19:57:38.790572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.168 Running I/O for 1 seconds... 
00:13:56.114 92928.00 IOPS, 363.00 MiB/s 00:13:56.114 Latency(us) 00:13:56.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.114 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.114 nvme0n1 : 1.01 15263.53 59.62 0.00 0.00 8375.69 5948.65 21878.94 00:13:56.114 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.114 nvme1n1 : 1.02 16174.31 63.18 0.00 0.00 7896.39 5242.88 15627.82 00:13:56.114 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.114 nvme2n1 : 1.02 15244.02 59.55 0.00 0.00 8314.88 4738.76 18753.38 00:13:56.114 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.114 nvme2n2 : 1.03 15226.61 59.48 0.00 0.00 8317.92 4713.55 18652.55 00:13:56.114 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.114 nvme2n3 : 1.03 15209.42 59.41 0.00 0.00 8318.64 4688.34 18551.73 00:13:56.114 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:56.114 nvme3n1 : 1.02 15148.82 59.18 0.00 0.00 8343.47 4637.93 22685.54 00:13:56.114 =================================================================================================================== 00:13:56.114 Total : 92266.71 360.42 0.00 0.00 8257.43 4637.93 22685.54 00:13:57.058 00:13:57.058 real 0m3.016s 00:13:57.058 user 0m2.264s 00:13:57.058 sys 0m0.563s 00:13:57.058 19:57:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.058 ************************************ 00:13:57.058 END TEST bdev_write_zeroes 00:13:57.058 ************************************ 00:13:57.058 19:57:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:57.058 19:57:41 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.058 19:57:41 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:57.058 19:57:41 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.058 19:57:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.058 ************************************ 00:13:57.058 START TEST bdev_json_nonenclosed 00:13:57.058 ************************************ 00:13:57.058 19:57:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:57.338 [2024-09-30 19:57:41.450067] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:57.338 [2024-09-30 19:57:41.450232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70842 ] 00:13:57.338 [2024-09-30 19:57:41.609632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.637 [2024-09-30 19:57:41.885037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.637 [2024-09-30 19:57:41.885175] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:13:57.637 [2024-09-30 19:57:41.885198] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:57.637 [2024-09-30 19:57:41.885210] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:57.901 00:13:57.901 real 0m0.870s 00:13:57.901 user 0m0.624s 00:13:57.901 sys 0m0.138s 00:13:57.901 19:57:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.901 ************************************ 00:13:57.901 END TEST bdev_json_nonenclosed 00:13:57.901 ************************************ 00:13:57.901 19:57:42 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:58.162 19:57:42 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.162 19:57:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:58.162 19:57:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.162 19:57:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.162 ************************************ 00:13:58.162 START TEST bdev_json_nonarray 00:13:58.162 ************************************ 00:13:58.162 19:57:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.162 [2024-09-30 19:57:42.387990] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:58.162 [2024-09-30 19:57:42.388157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70873 ] 00:13:58.423 [2024-09-30 19:57:42.541061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.684 [2024-09-30 19:57:42.809962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.684 [2024-09-30 19:57:42.810100] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:58.684 [2024-09-30 19:57:42.810123] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:58.684 [2024-09-30 19:57:42.810135] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.944 00:13:58.944 real 0m0.853s 00:13:58.944 user 0m0.597s 00:13:58.944 sys 0m0.148s 00:13:58.944 19:57:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.944 ************************************ 00:13:58.944 END TEST bdev_json_nonarray 00:13:58.944 ************************************ 00:13:58.944 19:57:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:58.944 19:57:43 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:59.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:01.434 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:01.434 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:01.434 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:02.378 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:02.378 00:14:02.378 real 0m59.307s 00:14:02.378 user 1m30.032s 00:14:02.378 sys 0m30.624s 00:14:02.378 19:57:46 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.379 19:57:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:02.379 ************************************ 00:14:02.379 END TEST blockdev_xnvme 00:14:02.379 ************************************ 00:14:02.379 19:57:46 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:02.379 19:57:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:02.379 19:57:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.379 19:57:46 -- common/autotest_common.sh@10 -- # set +x 00:14:02.379 ************************************ 00:14:02.379 START TEST ublk 00:14:02.379 ************************************ 00:14:02.379 19:57:46 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:02.379 * Looking for test storage... 
00:14:02.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:02.379 19:57:46 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:02.379 19:57:46 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:02.379 19:57:46 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:02.641 19:57:46 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.641 19:57:46 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.641 19:57:46 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.641 19:57:46 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.641 19:57:46 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.641 19:57:46 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.641 19:57:46 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:02.641 19:57:46 ublk -- scripts/common.sh@345 -- # : 1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.641 19:57:46 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.641 19:57:46 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@353 -- # local d=1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.641 19:57:46 ublk -- scripts/common.sh@355 -- # echo 1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.641 19:57:46 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@353 -- # local d=2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.641 19:57:46 ublk -- scripts/common.sh@355 -- # echo 2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.641 19:57:46 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.641 19:57:46 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.641 19:57:46 ublk -- scripts/common.sh@368 -- # return 0 00:14:02.641 19:57:46 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.641 19:57:46 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:02.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.641 --rc genhtml_branch_coverage=1 00:14:02.641 --rc genhtml_function_coverage=1 00:14:02.641 --rc genhtml_legend=1 00:14:02.641 --rc geninfo_all_blocks=1 00:14:02.641 --rc geninfo_unexecuted_blocks=1 00:14:02.641 00:14:02.642 ' 00:14:02.642 19:57:46 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:02.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.642 --rc genhtml_branch_coverage=1 00:14:02.642 --rc genhtml_function_coverage=1 00:14:02.642 --rc genhtml_legend=1 00:14:02.642 --rc geninfo_all_blocks=1 00:14:02.642 --rc geninfo_unexecuted_blocks=1 00:14:02.642 00:14:02.642 ' 00:14:02.642 19:57:46 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:02.642 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:02.642 --rc genhtml_branch_coverage=1 00:14:02.642 --rc genhtml_function_coverage=1 00:14:02.642 --rc genhtml_legend=1 00:14:02.642 --rc geninfo_all_blocks=1 00:14:02.642 --rc geninfo_unexecuted_blocks=1 00:14:02.642 00:14:02.642 ' 00:14:02.642 19:57:46 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:02.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.642 --rc genhtml_branch_coverage=1 00:14:02.642 --rc genhtml_function_coverage=1 00:14:02.642 --rc genhtml_legend=1 00:14:02.642 --rc geninfo_all_blocks=1 00:14:02.642 --rc geninfo_unexecuted_blocks=1 00:14:02.642 00:14:02.642 ' 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:02.642 19:57:46 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:02.642 19:57:46 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:02.642 19:57:46 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:02.642 19:57:46 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:02.642 19:57:46 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:02.642 19:57:46 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:02.642 19:57:46 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:02.642 19:57:46 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@133 -- # 
modprobe ublk_drv 00:14:02.642 19:57:46 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:02.642 19:57:46 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:02.642 19:57:46 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.642 19:57:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:02.642 ************************************ 00:14:02.642 START TEST test_save_ublk_config 00:14:02.642 ************************************ 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:02.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71175 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71175 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71175 ']' 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:02.642 19:57:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:02.642 [2024-09-30 19:57:46.905164] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:02.642 [2024-09-30 19:57:46.905342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:14:02.903 [2024-09-30 19:57:47.062265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.164 [2024-09-30 19:57:47.345051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.126 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.126 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:04.126 19:57:48 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:04.126 19:57:48 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:04.126 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.126 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.126 [2024-09-30 19:57:48.161301] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:04.126 [2024-09-30 19:57:48.162313] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:04.126 malloc0 00:14:04.126 [2024-09-30 19:57:48.241448] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:04.126 [2024-09-30 19:57:48.241580] ublk.c:1965:ublk_start_disk: *INFO*: Enabling 
kernel access to bdev malloc0 via ublk 0 00:14:04.126 [2024-09-30 19:57:48.241594] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:04.126 [2024-09-30 19:57:48.241606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.126 [2024-09-30 19:57:48.250437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.126 [2024-09-30 19:57:48.250470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.126 [2024-09-30 19:57:48.257311] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.126 [2024-09-30 19:57:48.257463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:04.127 [2024-09-30 19:57:48.274304] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.127 0 00:14:04.127 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.127 19:57:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:04.127 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.127 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:04.389 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.389 19:57:48 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:04.389 "subsystems": [ 00:14:04.389 { 00:14:04.389 "subsystem": "fsdev", 00:14:04.389 "config": [ 00:14:04.389 { 00:14:04.389 "method": "fsdev_set_opts", 00:14:04.389 "params": { 00:14:04.389 "fsdev_io_pool_size": 65535, 00:14:04.389 "fsdev_io_cache_size": 256 00:14:04.389 } 00:14:04.389 } 00:14:04.389 ] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "keyring", 00:14:04.389 "config": [] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "iobuf", 00:14:04.389 "config": [ 00:14:04.389 { 
00:14:04.389 "method": "iobuf_set_options", 00:14:04.389 "params": { 00:14:04.389 "small_pool_count": 8192, 00:14:04.389 "large_pool_count": 1024, 00:14:04.389 "small_bufsize": 8192, 00:14:04.389 "large_bufsize": 135168 00:14:04.389 } 00:14:04.389 } 00:14:04.389 ] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "sock", 00:14:04.389 "config": [ 00:14:04.389 { 00:14:04.389 "method": "sock_set_default_impl", 00:14:04.389 "params": { 00:14:04.389 "impl_name": "posix" 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "sock_impl_set_options", 00:14:04.389 "params": { 00:14:04.389 "impl_name": "ssl", 00:14:04.389 "recv_buf_size": 4096, 00:14:04.389 "send_buf_size": 4096, 00:14:04.389 "enable_recv_pipe": true, 00:14:04.389 "enable_quickack": false, 00:14:04.389 "enable_placement_id": 0, 00:14:04.389 "enable_zerocopy_send_server": true, 00:14:04.389 "enable_zerocopy_send_client": false, 00:14:04.389 "zerocopy_threshold": 0, 00:14:04.389 "tls_version": 0, 00:14:04.389 "enable_ktls": false 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "sock_impl_set_options", 00:14:04.389 "params": { 00:14:04.389 "impl_name": "posix", 00:14:04.389 "recv_buf_size": 2097152, 00:14:04.389 "send_buf_size": 2097152, 00:14:04.389 "enable_recv_pipe": true, 00:14:04.389 "enable_quickack": false, 00:14:04.389 "enable_placement_id": 0, 00:14:04.389 "enable_zerocopy_send_server": true, 00:14:04.389 "enable_zerocopy_send_client": false, 00:14:04.389 "zerocopy_threshold": 0, 00:14:04.389 "tls_version": 0, 00:14:04.389 "enable_ktls": false 00:14:04.389 } 00:14:04.389 } 00:14:04.389 ] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "vmd", 00:14:04.389 "config": [] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "accel", 00:14:04.389 "config": [ 00:14:04.389 { 00:14:04.389 "method": "accel_set_options", 00:14:04.389 "params": { 00:14:04.389 "small_cache_size": 128, 00:14:04.389 "large_cache_size": 16, 00:14:04.389 "task_count": 2048, 
00:14:04.389 "sequence_count": 2048, 00:14:04.389 "buf_count": 2048 00:14:04.389 } 00:14:04.389 } 00:14:04.389 ] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "bdev", 00:14:04.389 "config": [ 00:14:04.389 { 00:14:04.389 "method": "bdev_set_options", 00:14:04.389 "params": { 00:14:04.389 "bdev_io_pool_size": 65535, 00:14:04.389 "bdev_io_cache_size": 256, 00:14:04.389 "bdev_auto_examine": true, 00:14:04.389 "iobuf_small_cache_size": 128, 00:14:04.389 "iobuf_large_cache_size": 16 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "bdev_raid_set_options", 00:14:04.389 "params": { 00:14:04.389 "process_window_size_kb": 1024, 00:14:04.389 "process_max_bandwidth_mb_sec": 0 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "bdev_iscsi_set_options", 00:14:04.389 "params": { 00:14:04.389 "timeout_sec": 30 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "bdev_nvme_set_options", 00:14:04.389 "params": { 00:14:04.389 "action_on_timeout": "none", 00:14:04.389 "timeout_us": 0, 00:14:04.389 "timeout_admin_us": 0, 00:14:04.389 "keep_alive_timeout_ms": 10000, 00:14:04.389 "arbitration_burst": 0, 00:14:04.389 "low_priority_weight": 0, 00:14:04.389 "medium_priority_weight": 0, 00:14:04.389 "high_priority_weight": 0, 00:14:04.389 "nvme_adminq_poll_period_us": 10000, 00:14:04.389 "nvme_ioq_poll_period_us": 0, 00:14:04.389 "io_queue_requests": 0, 00:14:04.389 "delay_cmd_submit": true, 00:14:04.389 "transport_retry_count": 4, 00:14:04.389 "bdev_retry_count": 3, 00:14:04.389 "transport_ack_timeout": 0, 00:14:04.389 "ctrlr_loss_timeout_sec": 0, 00:14:04.389 "reconnect_delay_sec": 0, 00:14:04.389 "fast_io_fail_timeout_sec": 0, 00:14:04.389 "disable_auto_failback": false, 00:14:04.389 "generate_uuids": false, 00:14:04.389 "transport_tos": 0, 00:14:04.389 "nvme_error_stat": false, 00:14:04.389 "rdma_srq_size": 0, 00:14:04.389 "io_path_stat": false, 00:14:04.389 "allow_accel_sequence": false, 00:14:04.389 "rdma_max_cq_size": 
0, 00:14:04.389 "rdma_cm_event_timeout_ms": 0, 00:14:04.389 "dhchap_digests": [ 00:14:04.389 "sha256", 00:14:04.389 "sha384", 00:14:04.389 "sha512" 00:14:04.389 ], 00:14:04.389 "dhchap_dhgroups": [ 00:14:04.389 "null", 00:14:04.389 "ffdhe2048", 00:14:04.389 "ffdhe3072", 00:14:04.389 "ffdhe4096", 00:14:04.389 "ffdhe6144", 00:14:04.389 "ffdhe8192" 00:14:04.389 ] 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "bdev_nvme_set_hotplug", 00:14:04.389 "params": { 00:14:04.389 "period_us": 100000, 00:14:04.389 "enable": false 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "bdev_malloc_create", 00:14:04.389 "params": { 00:14:04.389 "name": "malloc0", 00:14:04.389 "num_blocks": 8192, 00:14:04.389 "block_size": 4096, 00:14:04.389 "physical_block_size": 4096, 00:14:04.389 "uuid": "f338f726-dfcc-466c-ab2f-2cd56fb62d1e", 00:14:04.389 "optimal_io_boundary": 0, 00:14:04.389 "md_size": 0, 00:14:04.389 "dif_type": 0, 00:14:04.389 "dif_is_head_of_md": false, 00:14:04.389 "dif_pi_format": 0 00:14:04.389 } 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "method": "bdev_wait_for_examine" 00:14:04.389 } 00:14:04.389 ] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "scsi", 00:14:04.389 "config": null 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "scheduler", 00:14:04.389 "config": [ 00:14:04.389 { 00:14:04.389 "method": "framework_set_scheduler", 00:14:04.389 "params": { 00:14:04.389 "name": "static" 00:14:04.389 } 00:14:04.389 } 00:14:04.389 ] 00:14:04.389 }, 00:14:04.389 { 00:14:04.389 "subsystem": "vhost_scsi", 00:14:04.390 "config": [] 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "subsystem": "vhost_blk", 00:14:04.390 "config": [] 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "subsystem": "ublk", 00:14:04.390 "config": [ 00:14:04.390 { 00:14:04.390 "method": "ublk_create_target", 00:14:04.390 "params": { 00:14:04.390 "cpumask": "1" 00:14:04.390 } 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "method": "ublk_start_disk", 
00:14:04.390 "params": { 00:14:04.390 "bdev_name": "malloc0", 00:14:04.390 "ublk_id": 0, 00:14:04.390 "num_queues": 1, 00:14:04.390 "queue_depth": 128 00:14:04.390 } 00:14:04.390 } 00:14:04.390 ] 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "subsystem": "nbd", 00:14:04.390 "config": [] 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "subsystem": "nvmf", 00:14:04.390 "config": [ 00:14:04.390 { 00:14:04.390 "method": "nvmf_set_config", 00:14:04.390 "params": { 00:14:04.390 "discovery_filter": "match_any", 00:14:04.390 "admin_cmd_passthru": { 00:14:04.390 "identify_ctrlr": false 00:14:04.390 }, 00:14:04.390 "dhchap_digests": [ 00:14:04.390 "sha256", 00:14:04.390 "sha384", 00:14:04.390 "sha512" 00:14:04.390 ], 00:14:04.390 "dhchap_dhgroups": [ 00:14:04.390 "null", 00:14:04.390 "ffdhe2048", 00:14:04.390 "ffdhe3072", 00:14:04.390 "ffdhe4096", 00:14:04.390 "ffdhe6144", 00:14:04.390 "ffdhe8192" 00:14:04.390 ] 00:14:04.390 } 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "method": "nvmf_set_max_subsystems", 00:14:04.390 "params": { 00:14:04.390 "max_subsystems": 1024 00:14:04.390 } 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "method": "nvmf_set_crdt", 00:14:04.390 "params": { 00:14:04.390 "crdt1": 0, 00:14:04.390 "crdt2": 0, 00:14:04.390 "crdt3": 0 00:14:04.390 } 00:14:04.390 } 00:14:04.390 ] 00:14:04.390 }, 00:14:04.390 { 00:14:04.390 "subsystem": "iscsi", 00:14:04.390 "config": [ 00:14:04.390 { 00:14:04.390 "method": "iscsi_set_options", 00:14:04.390 "params": { 00:14:04.390 "node_base": "iqn.2016-06.io.spdk", 00:14:04.390 "max_sessions": 128, 00:14:04.390 "max_connections_per_session": 2, 00:14:04.390 "max_queue_depth": 64, 00:14:04.390 "default_time2wait": 2, 00:14:04.390 "default_time2retain": 20, 00:14:04.390 "first_burst_length": 8192, 00:14:04.390 "immediate_data": true, 00:14:04.390 "allow_duplicated_isid": false, 00:14:04.390 "error_recovery_level": 0, 00:14:04.390 "nop_timeout": 60, 00:14:04.390 "nop_in_interval": 30, 00:14:04.390 "disable_chap": false, 00:14:04.390 
"require_chap": false, 00:14:04.390 "mutual_chap": false, 00:14:04.390 "chap_group": 0, 00:14:04.390 "max_large_datain_per_connection": 64, 00:14:04.390 "max_r2t_per_connection": 4, 00:14:04.390 "pdu_pool_size": 36864, 00:14:04.390 "immediate_data_pool_size": 16384, 00:14:04.390 "data_out_pool_size": 2048 00:14:04.390 } 00:14:04.390 } 00:14:04.390 ] 00:14:04.390 } 00:14:04.390 ] 00:14:04.390 }' 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71175 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71175 ']' 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71175 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71175 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.390 killing process with pid 71175 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71175' 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71175 00:14:04.390 19:57:48 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71175 00:14:05.777 [2024-09-30 19:57:49.766261] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:05.777 [2024-09-30 19:57:49.806442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:05.777 [2024-09-30 19:57:49.806593] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:05.777 [2024-09-30 
19:57:49.817301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:05.777 [2024-09-30 19:57:49.817380] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:05.777 [2024-09-30 19:57:49.817393] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:05.777 [2024-09-30 19:57:49.817438] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:05.777 [2024-09-30 19:57:49.817625] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.167 19:57:51 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71235 00:14:07.167 19:57:51 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71235 00:14:07.167 19:57:51 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71235 ']' 00:14:07.167 19:57:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.167 19:57:51 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.167 19:57:51 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:07.167 19:57:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:07.167 "subsystems": [ 00:14:07.167 { 00:14:07.167 "subsystem": "fsdev", 00:14:07.167 "config": [ 00:14:07.167 { 00:14:07.167 "method": "fsdev_set_opts", 00:14:07.167 "params": { 00:14:07.167 "fsdev_io_pool_size": 65535, 00:14:07.167 "fsdev_io_cache_size": 256 00:14:07.167 } 00:14:07.167 } 00:14:07.167 ] 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "subsystem": "keyring", 00:14:07.167 "config": [] 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "subsystem": "iobuf", 00:14:07.167 "config": [ 00:14:07.167 { 00:14:07.167 "method": "iobuf_set_options", 00:14:07.167 "params": { 00:14:07.167 "small_pool_count": 8192, 00:14:07.167 "large_pool_count": 1024, 00:14:07.167 "small_bufsize": 8192, 00:14:07.167 "large_bufsize": 135168 00:14:07.167 } 00:14:07.167 } 00:14:07.167 ] 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "subsystem": "sock", 00:14:07.167 "config": [ 00:14:07.167 { 00:14:07.167 "method": "sock_set_default_impl", 00:14:07.167 "params": { 00:14:07.167 "impl_name": "posix" 00:14:07.167 } 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "method": "sock_impl_set_options", 00:14:07.167 "params": { 00:14:07.167 "impl_name": "ssl", 00:14:07.167 "recv_buf_size": 4096, 00:14:07.167 "send_buf_size": 4096, 00:14:07.167 "enable_recv_pipe": true, 00:14:07.167 "enable_quickack": false, 00:14:07.167 "enable_placement_id": 0, 00:14:07.167 "enable_zerocopy_send_server": true, 00:14:07.167 "enable_zerocopy_send_client": false, 00:14:07.167 "zerocopy_threshold": 0, 00:14:07.167 "tls_version": 0, 00:14:07.167 "enable_ktls": false 00:14:07.167 } 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "method": "sock_impl_set_options", 00:14:07.167 "params": { 00:14:07.167 "impl_name": "posix", 00:14:07.167 "recv_buf_size": 2097152, 00:14:07.167 "send_buf_size": 2097152, 00:14:07.167 "enable_recv_pipe": true, 00:14:07.167 "enable_quickack": false, 00:14:07.167 "enable_placement_id": 0, 00:14:07.167 
"enable_zerocopy_send_server": true, 00:14:07.167 "enable_zerocopy_send_client": false, 00:14:07.167 "zerocopy_threshold": 0, 00:14:07.167 "tls_version": 0, 00:14:07.167 "enable_ktls": false 00:14:07.167 } 00:14:07.167 } 00:14:07.167 ] 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "subsystem": "vmd", 00:14:07.167 "config": [] 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "subsystem": "accel", 00:14:07.167 "config": [ 00:14:07.167 { 00:14:07.167 "method": "accel_set_options", 00:14:07.167 "params": { 00:14:07.167 "small_cache_size": 128, 00:14:07.167 "large_cache_size": 16, 00:14:07.167 "task_count": 2048, 00:14:07.167 "sequence_count": 2048, 00:14:07.167 "buf_count": 2048 00:14:07.167 } 00:14:07.167 } 00:14:07.167 ] 00:14:07.167 }, 00:14:07.167 { 00:14:07.167 "subsystem": "bdev", 00:14:07.167 "config": [ 00:14:07.167 { 00:14:07.167 "method": "bdev_set_options", 00:14:07.167 "params": { 00:14:07.167 "bdev_io_pool_size": 65535, 00:14:07.167 "bdev_io_cache_size": 256, 00:14:07.167 "bdev_auto_examine": true, 00:14:07.167 "iobuf_small_cache_size": 128, 00:14:07.167 "iobuf_large_cache_size": 16 00:14:07.167 } 00:14:07.167 }, 00:14:07.167 { 00:14:07.168 "method": "bdev_raid_set_options", 00:14:07.168 "params": { 00:14:07.168 "process_window_size_kb": 1024, 00:14:07.168 "process_max_bandwidth_mb_sec": 0 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "bdev_iscsi_set_options", 00:14:07.168 "params": { 00:14:07.168 "timeout_sec": 30 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "bdev_nvme_set_options", 00:14:07.168 "params": { 00:14:07.168 "action_on_timeout": "none", 00:14:07.168 "timeout_us": 0, 00:14:07.168 "timeout_admin_us": 0, 00:14:07.168 "keep_alive_timeout_ms": 10000, 00:14:07.168 "arbitration_burst": 0, 00:14:07.168 "low_priority_weight": 0, 00:14:07.168 "medium_priority_weight": 0, 00:14:07.168 "high_priority_weight": 0, 00:14:07.168 "nvme_adminq_poll_period_us": 10000, 00:14:07.168 "nvme_ioq_poll_period_us": 0, 00:14:07.168 
"io_queue_requests": 0, 00:14:07.168 "delay_cmd_submit": true, 00:14:07.168 "transport_retry_count": 4, 00:14:07.168 "bdev_retry_count": 3, 00:14:07.168 "transport_ack_timeout": 0, 00:14:07.168 "ctrlr_loss_timeout_sec": 0, 00:14:07.168 "reconnect_delay_sec": 0, 00:14:07.168 "fast_io_fail_timeout_sec": 0, 00:14:07.168 "disable_auto_failback": false, 00:14:07.168 "generate_uuids": false, 00:14:07.168 "transport_tos": 0, 00:14:07.168 "nvme_error_stat": false, 00:14:07.168 "rdma_srq_size": 0, 00:14:07.168 "io_path_stat": false, 00:14:07.168 "allow_accel_sequence": false, 00:14:07.168 "rdma_max_cq_size": 0, 00:14:07.168 "rdma_cm_event_timeout_ms": 0, 00:14:07.168 "dhchap_digests": [ 00:14:07.168 "sha256", 00:14:07.168 "sha384", 00:14:07.168 "sha512" 00:14:07.168 ], 00:14:07.168 "dhchap_dhgroups": [ 00:14:07.168 "null", 00:14:07.168 "ffdhe2048", 00:14:07.168 "ffdhe3072", 00:14:07.168 "ffdhe4096", 00:14:07.168 "ffdhe6144", 00:14:07.168 "ffdhe8192" 00:14:07.168 ] 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "bdev_nvme_set_hotplug", 00:14:07.168 "params": { 00:14:07.168 "period_us": 100000, 00:14:07.168 "enable": false 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "bdev_malloc_create", 00:14:07.168 "params": { 00:14:07.168 "name": "malloc0", 00:14:07.168 "num_blocks": 8192, 00:14:07.168 "block_size": 4096, 00:14:07.168 "physical_block_size": 4096, 00:14:07.168 "uuid": "f338f726-dfcc-466c-ab2f-2cd56fb62d1e", 00:14:07.168 "optimal_io_boundary": 0, 00:14:07.168 "md_size": 0, 00:14:07.168 "dif_type": 0, 00:14:07.168 "dif_is_head_of_md": false, 00:14:07.168 "dif_pi_format": 0 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "bdev_wait_for_examine" 00:14:07.168 } 00:14:07.168 ] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "scsi", 00:14:07.168 "config": null 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "scheduler", 00:14:07.168 "config": [ 00:14:07.168 { 00:14:07.168 "method": 
"framework_set_scheduler", 00:14:07.168 "params": { 00:14:07.168 "name": "static" 00:14:07.168 } 00:14:07.168 } 00:14:07.168 ] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "vhost_scsi", 00:14:07.168 "config": [] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "vhost_blk", 00:14:07.168 "config": [] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "ublk", 00:14:07.168 "config": [ 00:14:07.168 { 00:14:07.168 "method": "ublk_create_target", 00:14:07.168 "params": { 00:14:07.168 "cpumask": "1" 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "ublk_start_disk", 00:14:07.168 "params": { 00:14:07.168 "bdev_name": "malloc0", 00:14:07.168 "ublk_id": 0, 00:14:07.168 "num_queues": 1, 00:14:07.168 "queue_depth": 128 00:14:07.168 } 00:14:07.168 } 00:14:07.168 ] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "nbd", 00:14:07.168 "config": [] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "nvmf", 00:14:07.168 "config": [ 00:14:07.168 { 00:14:07.168 "method": "nvmf_set_config", 00:14:07.168 "params": { 00:14:07.168 "discovery_filter": "match_any", 00:14:07.168 "admin_cmd_passthru": { 00:14:07.168 "identify_ctrlr": false 00:14:07.168 }, 00:14:07.168 "dhchap_digests": [ 00:14:07.168 "sha256", 00:14:07.168 "sha384", 00:14:07.168 "sha512" 00:14:07.168 ], 00:14:07.168 "dhchap_dhgroups": [ 00:14:07.168 "null", 00:14:07.168 "ffdhe2048", 00:14:07.168 "ffdhe3072", 00:14:07.168 "ffdhe4096", 00:14:07.168 "ffdhe6144", 00:14:07.168 "ffdhe8192" 00:14:07.168 ] 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "nvmf_set_max_subsystems", 00:14:07.168 "params": { 00:14:07.168 "max_subsystems": 1024 00:14:07.168 } 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "method": "nvmf_set_crdt", 00:14:07.168 "params": { 00:14:07.168 "crdt1": 0, 00:14:07.168 "crdt2": 0, 00:14:07.168 "crdt3": 0 00:14:07.168 } 00:14:07.168 } 00:14:07.168 ] 00:14:07.168 }, 00:14:07.168 { 00:14:07.168 "subsystem": "iscsi", 00:14:07.168 "config": [ 
00:14:07.168 { 00:14:07.168 "method": "iscsi_set_options", 00:14:07.168 "params": { 00:14:07.168 "node_base": "iqn.2016-06.io.spdk", 00:14:07.168 "max_sessions": 128, 00:14:07.168 "max_connections_per_session": 2, 00:14:07.168 "max_queue_depth": 64, 00:14:07.168 "default_time2wait": 2, 00:14:07.168 "default_time2retain": 20, 00:14:07.168 "first_burst_length": 8192, 00:14:07.168 "immediate_data": true, 00:14:07.168 "allow_duplicated_isid": false, 00:14:07.168 "error_recovery_level": 0, 00:14:07.168 "nop_timeout": 60, 00:14:07.168 "nop_in_interval": 30, 00:14:07.168 "disable_chap": false, 00:14:07.168 "require_chap": false, 00:14:07.168 "mutual_chap": false, 00:14:07.168 "chap_group": 0, 00:14:07.168 "max_large_datain_per_connection": 64, 00:14:07.168 "max_r2t_per_connection": 4, 00:14:07.168 "pdu_pool_size": 36864, 00:14:07.168 "immediate_data_pool_size": 16384, 00:14:07.168 "data_out_pool_size": 2048 00:14:07.168 } 00:14:07.168 } 00:14:07.168 ] 00:14:07.168 } 00:14:07.168 ] 00:14:07.168 }' 00:14:07.168 19:57:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.168 19:57:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.168 19:57:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:07.428 [2024-09-30 19:57:51.539855] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:07.428 [2024-09-30 19:57:51.539999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71235 ] 00:14:07.428 [2024-09-30 19:57:51.691245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.686 [2024-09-30 19:57:51.890377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.253 [2024-09-30 19:57:52.572287] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.253 [2024-09-30 19:57:52.572970] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.253 [2024-09-30 19:57:52.580379] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:08.253 [2024-09-30 19:57:52.580443] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:08.253 [2024-09-30 19:57:52.580449] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:08.253 [2024-09-30 19:57:52.580455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.253 [2024-09-30 19:57:52.589350] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.253 [2024-09-30 19:57:52.589370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.253 [2024-09-30 19:57:52.596288] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.253 [2024-09-30 19:57:52.596379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:08.253 [2024-09-30 19:57:52.613286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.512 19:57:52 
ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71235 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71235 ']' 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71235 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71235 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.512 killing process with pid 71235 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71235' 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71235 00:14:08.512 19:57:52 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71235 00:14:09.447 
[2024-09-30 19:57:53.733087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.447 [2024-09-30 19:57:53.757368] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.447 [2024-09-30 19:57:53.757483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.447 [2024-09-30 19:57:53.767292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.447 [2024-09-30 19:57:53.767339] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:09.447 [2024-09-30 19:57:53.767346] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:09.447 [2024-09-30 19:57:53.767373] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:09.447 [2024-09-30 19:57:53.767489] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:10.826 19:57:55 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:10.826 00:14:10.826 real 0m8.274s 00:14:10.826 user 0m5.785s 00:14:10.826 sys 0m3.127s 00:14:10.826 ************************************ 00:14:10.826 END TEST test_save_ublk_config 00:14:10.826 ************************************ 00:14:10.826 19:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.826 19:57:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:10.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:10.826 19:57:55 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71308 00:14:10.826 19:57:55 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:10.826 19:57:55 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71308 00:14:10.826 19:57:55 ublk -- common/autotest_common.sh@831 -- # '[' -z 71308 ']' 00:14:10.826 19:57:55 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.826 19:57:55 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.826 19:57:55 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.826 19:57:55 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.826 19:57:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.826 19:57:55 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:11.086 [2024-09-30 19:57:55.213904] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:11.086 [2024-09-30 19:57:55.214033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71308 ] 00:14:11.086 [2024-09-30 19:57:55.363195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:11.344 [2024-09-30 19:57:55.528617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.344 [2024-09-30 19:57:55.528720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.912 19:57:56 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.912 19:57:56 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:11.912 19:57:56 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:11.912 19:57:56 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.912 19:57:56 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.912 19:57:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.912 ************************************ 00:14:11.912 START TEST test_create_ublk 00:14:11.912 ************************************ 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:11.912 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.912 [2024-09-30 19:57:56.083291] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:11.912 [2024-09-30 19:57:56.084567] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.912 19:57:56 ublk.test_create_ublk -- 
ublk/ublk.sh@33 -- # ublk_target= 00:14:11.912 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.912 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:11.912 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.912 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.912 [2024-09-30 19:57:56.263415] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:11.912 [2024-09-30 19:57:56.263752] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:11.912 [2024-09-30 19:57:56.263761] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:11.912 [2024-09-30 19:57:56.263768] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.912 [2024-09-30 19:57:56.271307] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.912 [2024-09-30 19:57:56.271326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:12.171 [2024-09-30 19:57:56.279298] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:12.171 [2024-09-30 19:57:56.279834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:12.171 [2024-09-30 19:57:56.291316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:12.171 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:12.171 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.171 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.171 19:57:56 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:12.171 { 00:14:12.171 "ublk_device": "/dev/ublkb0", 00:14:12.171 "id": 0, 00:14:12.171 "queue_depth": 512, 00:14:12.171 "num_queues": 4, 00:14:12.171 "bdev_name": "Malloc0" 00:14:12.171 } 00:14:12.171 ]' 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:12.171 19:57:56 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:12.430 fio: verification read phase will never start because write phase uses all of runtime 00:14:12.430 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:12.430 fio-3.35 00:14:12.430 Starting 1 process 00:14:22.396 00:14:22.396 fio_test: (groupid=0, jobs=1): err= 0: pid=71352: Mon Sep 30 19:58:06 2024 00:14:22.396 write: IOPS=17.1k, BW=66.6MiB/s (69.9MB/s)(666MiB/10001msec); 0 zone resets 00:14:22.396 clat (usec): min=31, max=3899, avg=57.84, stdev=91.95 00:14:22.396 lat (usec): min=31, max=3900, avg=58.29, stdev=91.97 00:14:22.396 clat 
percentiles (usec): 00:14:22.396 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 47], 00:14:22.396 | 30.00th=[ 49], 40.00th=[ 51], 50.00th=[ 54], 60.00th=[ 56], 00:14:22.396 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 66], 95.00th=[ 71], 00:14:22.396 | 99.00th=[ 89], 99.50th=[ 188], 99.90th=[ 1713], 99.95th=[ 2671], 00:14:22.396 | 99.99th=[ 3490] 00:14:22.396 bw ( KiB/s): min=59912, max=83208, per=99.80%, avg=68081.26, stdev=6533.58, samples=19 00:14:22.396 iops : min=14978, max=20802, avg=17020.42, stdev=1633.43, samples=19 00:14:22.396 lat (usec) : 50=34.84%, 100=64.35%, 250=0.52%, 500=0.11%, 750=0.02% 00:14:22.396 lat (usec) : 1000=0.02% 00:14:22.396 lat (msec) : 2=0.06%, 4=0.08% 00:14:22.396 cpu : usr=2.26%, sys=14.88%, ctx=170605, majf=0, minf=796 00:14:22.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.396 issued rwts: total=0,170560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.396 00:14:22.396 Run status group 0 (all jobs): 00:14:22.396 WRITE: bw=66.6MiB/s (69.9MB/s), 66.6MiB/s-66.6MiB/s (69.9MB/s-69.9MB/s), io=666MiB (699MB), run=10001-10001msec 00:14:22.396 00:14:22.396 Disk stats (read/write): 00:14:22.396 ublkb0: ios=0/168699, merge=0/0, ticks=0/8026, in_queue=8027, util=99.08% 00:14:22.396 19:58:06 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.396 [2024-09-30 19:58:06.696753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:22.396 [2024-09-30 19:58:06.742328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:14:22.396 [2024-09-30 19:58:06.742924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:22.396 [2024-09-30 19:58:06.750300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:22.396 [2024-09-30 19:58:06.750529] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:22.396 [2024-09-30 19:58:06.750538] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.396 19:58:06 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.396 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.655 [2024-09-30 19:58:06.766357] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:22.655 request: 00:14:22.655 { 00:14:22.655 "ublk_id": 0, 00:14:22.655 "method": "ublk_stop_disk", 00:14:22.655 "req_id": 1 00:14:22.655 } 00:14:22.655 Got JSON-RPC error response 00:14:22.655 response: 00:14:22.655 { 00:14:22.655 "code": -19, 00:14:22.655 "message": "No such 
device" 00:14:22.655 } 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.655 19:58:06 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.655 [2024-09-30 19:58:06.782351] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:22.655 [2024-09-30 19:58:06.784240] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:22.655 [2024-09-30 19:58:06.784284] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.655 19:58:06 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.655 19:58:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.913 19:58:07 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:22.913 ************************************ 00:14:22.913 END TEST test_create_ublk 00:14:22.913 ************************************ 00:14:22.913 19:58:07 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:22.913 00:14:22.913 real 0m11.190s 00:14:22.913 user 0m0.533s 00:14:22.913 sys 0m1.561s 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.913 19:58:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 19:58:07 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:23.172 19:58:07 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:23.172 19:58:07 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:23.172 19:58:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 ************************************ 00:14:23.172 START TEST test_create_multi_ublk 00:14:23.172 ************************************ 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # 
rpc_cmd ublk_create_target 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 [2024-09-30 19:58:07.306284] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:23.172 [2024-09-30 19:58:07.307531] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.172 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 [2024-09-30 19:58:07.534404] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:23.172 [2024-09-30 19:58:07.534751] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:23.172 [2024-09-30 19:58:07.534763] ublk.c: 971:ublk_dev_list_register: *DEBUG*: 
ublk0: add to tailq 00:14:23.172 [2024-09-30 19:58:07.534772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.430 [2024-09-30 19:58:07.554289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.430 [2024-09-30 19:58:07.554310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.430 [2024-09-30 19:58:07.566286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.430 [2024-09-30 19:58:07.566812] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:23.430 [2024-09-30 19:58:07.590289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.430 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.430 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:23.430 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.430 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:23.430 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.430 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.688 [2024-09-30 19:58:07.854394] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 
num_queues 4 queue_depth 512 00:14:23.688 [2024-09-30 19:58:07.854721] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:23.688 [2024-09-30 19:58:07.854734] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:23.688 [2024-09-30 19:58:07.854748] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.688 [2024-09-30 19:58:07.866305] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.688 [2024-09-30 19:58:07.866321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.688 [2024-09-30 19:58:07.878287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.688 [2024-09-30 19:58:07.878813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:23.688 [2024-09-30 19:58:07.903294] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.688 19:58:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [2024-09-30 19:58:08.161393] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:23.946 [2024-09-30 19:58:08.161716] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:23.946 [2024-09-30 19:58:08.161728] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:23.946 [2024-09-30 19:58:08.161735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:23.946 [2024-09-30 19:58:08.173301] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:23.946 [2024-09-30 19:58:08.173320] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:23.946 [2024-09-30 19:58:08.185283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:23.946 [2024-09-30 19:58:08.185821] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:23.946 [2024-09-30 19:58:08.214294] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.946 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.205 19:58:08 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.205 [2024-09-30 19:58:08.397390] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:24.205 [2024-09-30 19:58:08.397706] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:24.205 [2024-09-30 19:58:08.397714] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:24.205 [2024-09-30 19:58:08.397719] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:24.205 [2024-09-30 19:58:08.405300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:24.205 [2024-09-30 19:58:08.405327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:24.205 [2024-09-30 19:58:08.413298] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:24.205 [2024-09-30 19:58:08.413827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:24.205 [2024-09-30 19:58:08.419394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.205 
19:58:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:24.205 { 00:14:24.205 "ublk_device": "/dev/ublkb0", 00:14:24.205 "id": 0, 00:14:24.205 "queue_depth": 512, 00:14:24.205 "num_queues": 4, 00:14:24.205 "bdev_name": "Malloc0" 00:14:24.205 }, 00:14:24.205 { 00:14:24.205 "ublk_device": "/dev/ublkb1", 00:14:24.205 "id": 1, 00:14:24.205 "queue_depth": 512, 00:14:24.205 "num_queues": 4, 00:14:24.205 "bdev_name": "Malloc1" 00:14:24.205 }, 00:14:24.205 { 00:14:24.205 "ublk_device": "/dev/ublkb2", 00:14:24.205 "id": 2, 00:14:24.205 "queue_depth": 512, 00:14:24.205 "num_queues": 4, 00:14:24.205 "bdev_name": "Malloc2" 00:14:24.205 }, 00:14:24.205 { 00:14:24.205 "ublk_device": "/dev/ublkb3", 00:14:24.205 "id": 3, 00:14:24.205 "queue_depth": 512, 00:14:24.205 "num_queues": 4, 00:14:24.205 "bdev_name": "Malloc3" 00:14:24.205 } 00:14:24.205 ]' 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:24.205 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # 
jq -r '.[0].bdev_name' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:24.464 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 
00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:24.722 19:58:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:24.722 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:24.722 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:24.722 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:24.722 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 [2024-09-30 19:58:09.105357] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.980 [2024-09-30 19:58:09.145318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.980 [2024-09-30 19:58:09.145999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.980 [2024-09-30 19:58:09.156286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.980 [2024-09-30 19:58:09.156542] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:24.980 [2024-09-30 19:58:09.156556] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 [2024-09-30 19:58:09.164341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.980 [2024-09-30 19:58:09.200318] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.980 [2024-09-30 19:58:09.200959] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.980 [2024-09-30 19:58:09.208309] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.980 [2024-09-30 19:58:09.208530] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:24.980 [2024-09-30 19:58:09.208543] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 19:58:09 
ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 [2024-09-30 19:58:09.224347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.980 [2024-09-30 19:58:09.254809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.980 [2024-09-30 19:58:09.255649] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.980 [2024-09-30 19:58:09.260294] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.980 [2024-09-30 19:58:09.260513] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:24.980 [2024-09-30 19:58:09.260525] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:24.980 [2024-09-30 19:58:09.276350] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.980 [2024-09-30 19:58:09.312314] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.980 [2024-09-30 19:58:09.312881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.980 [2024-09-30 19:58:09.320296] 
ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.980 [2024-09-30 19:58:09.320516] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:24.980 [2024-09-30 19:58:09.320529] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.980 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:25.237 [2024-09-30 19:58:09.536338] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:25.237 [2024-09-30 19:58:09.538218] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:25.237 [2024-09-30 19:58:09.538244] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:25.237 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:25.237 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:25.237 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:25.237 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.237 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:25.804 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.804 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:25.804 19:58:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:25.804 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.804 19:58:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.063 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.063 19:58:10 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:26.063 19:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:26.063 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.063 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.322 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.322 19:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:26.322 19:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:26.322 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.322 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:26.587 00:14:26.587 real 0m3.484s 00:14:26.587 user 0m0.838s 00:14:26.587 sys 0m0.151s 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.587 ************************************ 00:14:26.587 END TEST test_create_multi_ublk 00:14:26.587 ************************************ 00:14:26.587 19:58:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 19:58:10 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:26.587 19:58:10 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:26.587 19:58:10 ublk -- ublk/ublk.sh@130 -- # killprocess 71308 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@950 -- # '[' -z 71308 ']' 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@954 -- # kill -0 71308 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@955 -- # uname 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71308 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:26.587 killing process with pid 71308 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71308' 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@969 -- # kill 71308 00:14:26.587 19:58:10 ublk -- common/autotest_common.sh@974 -- # wait 71308 00:14:27.187 [2024-09-30 19:58:11.430595] ublk.c: 
835:_ublk_fini: *DEBUG*: finish shutdown 00:14:27.187 [2024-09-30 19:58:11.430646] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:28.123 00:14:28.123 real 0m25.566s 00:14:28.123 user 0m34.571s 00:14:28.123 sys 0m11.280s 00:14:28.123 19:58:12 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.123 ************************************ 00:14:28.123 END TEST ublk 00:14:28.123 ************************************ 00:14:28.123 19:58:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.123 19:58:12 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:28.123 19:58:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:28.123 19:58:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.123 19:58:12 -- common/autotest_common.sh@10 -- # set +x 00:14:28.123 ************************************ 00:14:28.123 START TEST ublk_recovery 00:14:28.123 ************************************ 00:14:28.123 19:58:12 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:28.123 * Looking for test storage... 
00:14:28.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:28.123 19:58:12 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:28.123 19:58:12 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:28.123 19:58:12 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:28.123 19:58:12 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.123 19:58:12 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.124 19:58:12 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:28.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.124 --rc genhtml_branch_coverage=1 00:14:28.124 --rc genhtml_function_coverage=1 00:14:28.124 --rc genhtml_legend=1 00:14:28.124 --rc geninfo_all_blocks=1 00:14:28.124 --rc geninfo_unexecuted_blocks=1 00:14:28.124 00:14:28.124 ' 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:28.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.124 --rc genhtml_branch_coverage=1 00:14:28.124 --rc genhtml_function_coverage=1 00:14:28.124 --rc genhtml_legend=1 00:14:28.124 --rc geninfo_all_blocks=1 00:14:28.124 --rc geninfo_unexecuted_blocks=1 00:14:28.124 00:14:28.124 ' 
00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:28.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.124 --rc genhtml_branch_coverage=1 00:14:28.124 --rc genhtml_function_coverage=1 00:14:28.124 --rc genhtml_legend=1 00:14:28.124 --rc geninfo_all_blocks=1 00:14:28.124 --rc geninfo_unexecuted_blocks=1 00:14:28.124 00:14:28.124 ' 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:28.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.124 --rc genhtml_branch_coverage=1 00:14:28.124 --rc genhtml_function_coverage=1 00:14:28.124 --rc genhtml_legend=1 00:14:28.124 --rc geninfo_all_blocks=1 00:14:28.124 --rc geninfo_unexecuted_blocks=1 00:14:28.124 00:14:28.124 ' 00:14:28.124 19:58:12 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:28.124 19:58:12 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:28.124 19:58:12 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:28.124 19:58:12 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71702 00:14:28.124 19:58:12 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:28.124 19:58:12 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x3 -L ublk 00:14:28.124 19:58:12 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71702 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71702 ']' 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.124 19:58:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:28.124 [2024-09-30 19:58:12.421327] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:28.124 [2024-09-30 19:58:12.421884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 00:14:28.384 [2024-09-30 19:58:12.564742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.384 [2024-09-30 19:58:12.727739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.384 [2024-09-30 19:58:12.727878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.952 19:58:13 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.952 19:58:13 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:28.952 19:58:13 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:28.952 19:58:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.952 19:58:13 ublk_recovery -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.952 [2024-09-30 19:58:13.268289] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:28.952 [2024-09-30 19:58:13.269558] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:28.952 19:58:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.952 19:58:13 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:28.952 19:58:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.952 19:58:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.211 malloc0 00:14:29.211 19:58:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.211 19:58:13 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:29.211 19:58:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.211 19:58:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.211 [2024-09-30 19:58:13.356448] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:29.211 [2024-09-30 19:58:13.356535] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:29.211 [2024-09-30 19:58:13.356545] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:29.211 [2024-09-30 19:58:13.356552] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:29.211 [2024-09-30 19:58:13.364425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:29.211 [2024-09-30 19:58:13.364442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:29.211 [2024-09-30 19:58:13.372294] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:29.211 [2024-09-30 19:58:13.372418] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_START_DEV 00:14:29.211 [2024-09-30 19:58:13.383292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:29.211 1 00:14:29.211 19:58:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.211 19:58:13 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:30.146 19:58:14 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71737 00:14:30.146 19:58:14 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:30.146 19:58:14 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:30.146 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.146 fio-3.35 00:14:30.146 Starting 1 process 00:14:35.413 19:58:19 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71702 00:14:35.413 19:58:19 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:40.695 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71702 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:40.695 19:58:24 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:40.695 19:58:24 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71848 00:14:40.695 19:58:24 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.695 19:58:24 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71848 00:14:40.695 19:58:24 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71848 ']' 00:14:40.695 19:58:24 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.695 19:58:24 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.695 19:58:24 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.695 19:58:24 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.695 19:58:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:40.695 [2024-09-30 19:58:24.478143] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:40.695 [2024-09-30 19:58:24.478281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71848 ] 00:14:40.695 [2024-09-30 19:58:24.627621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.695 [2024-09-30 19:58:24.801133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.695 [2024-09-30 19:58:24.801152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:41.263 19:58:25 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.263 [2024-09-30 19:58:25.332293] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:41.263 [2024-09-30 19:58:25.333574] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.263 19:58:25 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:41.263 19:58:25 ublk_recovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.263 malloc0 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.263 19:58:25 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.263 [2024-09-30 19:58:25.428403] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:41.263 [2024-09-30 19:58:25.428439] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:41.263 [2024-09-30 19:58:25.428448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:41.263 [2024-09-30 19:58:25.436310] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:41.263 [2024-09-30 19:58:25.436330] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:41.263 1 00:14:41.263 19:58:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.263 19:58:25 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71737 00:14:42.198 [2024-09-30 19:58:26.436355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:42.198 [2024-09-30 19:58:26.444289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:42.198 [2024-09-30 19:58:26.444305] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:43.132 [2024-09-30 19:58:27.444327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:43.132 [2024-09-30 19:58:27.448287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:43.132 [2024-09-30 19:58:27.448301] ublk.c: 
391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:44.508 [2024-09-30 19:58:28.448318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:44.508 [2024-09-30 19:58:28.456292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:44.508 [2024-09-30 19:58:28.456306] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:44.508 [2024-09-30 19:58:28.456314] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:44.508 [2024-09-30 19:58:28.456389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:06.442 [2024-09-30 19:58:49.620308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:06.442 [2024-09-30 19:58:49.626625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:06.442 [2024-09-30 19:58:49.634474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:06.442 [2024-09-30 19:58:49.634493] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:32.977 00:15:32.977 fio_test: (groupid=0, jobs=1): err= 0: pid=71740: Mon Sep 30 19:59:14 2024 00:15:32.977 read: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(3308MiB/60001msec) 00:15:32.977 slat (nsec): min=1111, max=776021, avg=5584.84, stdev=2117.88 00:15:32.977 clat (usec): min=1007, max=30245k, avg=4491.50, stdev=262890.90 00:15:32.977 lat (usec): min=1012, max=30245k, avg=4497.08, stdev=262890.90 00:15:32.977 clat percentiles (usec): 00:15:32.977 | 1.00th=[ 1745], 5.00th=[ 1893], 10.00th=[ 1958], 20.00th=[ 1991], 00:15:32.977 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2073], 00:15:32.977 | 70.00th=[ 2089], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 3359], 00:15:32.977 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 8291], 
99.95th=[12256], 00:15:32.977 | 99.99th=[13304] 00:15:32.977 bw ( KiB/s): min=55720, max=120144, per=100.00%, avg=113024.68, stdev=14185.74, samples=59 00:15:32.977 iops : min=13930, max=30036, avg=28256.17, stdev=3546.43, samples=59 00:15:32.977 write: IOPS=14.1k, BW=55.1MiB/s (57.7MB/s)(3304MiB/60001msec); 0 zone resets 00:15:32.977 slat (nsec): min=1290, max=1808.2k, avg=5848.88, stdev=2816.13 00:15:32.977 clat (usec): min=673, max=30245k, avg=4570.53, stdev=263052.54 00:15:32.977 lat (usec): min=693, max=30245k, avg=4576.38, stdev=263052.53 00:15:32.977 clat percentiles (usec): 00:15:32.977 | 1.00th=[ 1762], 5.00th=[ 1942], 10.00th=[ 2024], 20.00th=[ 2073], 00:15:32.977 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180], 00:15:32.977 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 3359], 00:15:32.977 | 99.00th=[ 5473], 99.50th=[ 5932], 99.90th=[ 8455], 99.95th=[12256], 00:15:32.977 | 99.99th=[13435] 00:15:32.977 bw ( KiB/s): min=55216, max=120936, per=100.00%, avg=112883.93, stdev=14049.01, samples=59 00:15:32.977 iops : min=13804, max=30234, avg=28220.98, stdev=3512.25, samples=59 00:15:32.977 lat (usec) : 750=0.01% 00:15:32.977 lat (msec) : 2=15.69%, 4=80.95%, 10=3.30%, 20=0.05%, >=2000=0.01% 00:15:32.977 cpu : usr=3.29%, sys=16.70%, ctx=56348, majf=0, minf=13 00:15:32.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:32.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.977 issued rwts: total=846866,845829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.977 00:15:32.977 Run status group 0 (all jobs): 00:15:32.977 READ: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=3308MiB (3469MB), run=60001-60001msec 00:15:32.977 WRITE: bw=55.1MiB/s (57.7MB/s), 55.1MiB/s-55.1MiB/s 
(57.7MB/s-57.7MB/s), io=3304MiB (3465MB), run=60001-60001msec 00:15:32.977 00:15:32.977 Disk stats (read/write): 00:15:32.977 ublkb1: ios=843733/842568, merge=0/0, ticks=3731383/3723386, in_queue=7454770, util=99.90% 00:15:32.977 19:59:14 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.977 [2024-09-30 19:59:14.651737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:32.977 [2024-09-30 19:59:14.679300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:32.977 [2024-09-30 19:59:14.679455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:32.977 [2024-09-30 19:59:14.695283] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:32.977 [2024-09-30 19:59:14.695401] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:32.977 [2024-09-30 19:59:14.695410] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.977 19:59:14 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.977 [2024-09-30 19:59:14.706360] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.977 [2024-09-30 19:59:14.708221] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:32.977 [2024-09-30 19:59:14.708253] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.977 19:59:14 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - 
SIGINT SIGTERM EXIT 00:15:32.977 19:59:14 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:32.977 19:59:14 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71848 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71848 ']' 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71848 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71848 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.977 killing process with pid 71848 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71848' 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71848 00:15:32.977 19:59:14 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71848 00:15:32.977 [2024-09-30 19:59:15.861512] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.977 [2024-09-30 19:59:15.861578] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:32.977 00:15:32.977 real 1m4.467s 00:15:32.977 user 1m44.591s 00:15:32.977 sys 0m25.464s 00:15:32.977 19:59:16 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.977 19:59:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.977 ************************************ 00:15:32.977 END TEST ublk_recovery 00:15:32.977 ************************************ 00:15:32.977 19:59:16 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:32.977 19:59:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.977 19:59:16 -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.977 19:59:16 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:32.977 19:59:16 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:32.977 19:59:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:32.977 19:59:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.977 19:59:16 -- common/autotest_common.sh@10 -- # set +x 00:15:32.977 ************************************ 00:15:32.977 START TEST ftl 00:15:32.977 ************************************ 00:15:32.977 19:59:16 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:32.977 * Looking for test storage... 
00:15:32.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.977 19:59:16 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:32.977 19:59:16 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:32.977 19:59:16 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:32.977 19:59:16 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:32.977 19:59:16 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.977 19:59:16 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.977 19:59:16 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.977 19:59:16 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.977 19:59:16 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.977 19:59:16 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.977 19:59:16 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.977 19:59:16 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.977 19:59:16 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.977 19:59:16 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.977 19:59:16 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.977 19:59:16 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:32.977 19:59:16 ftl -- scripts/common.sh@345 -- # : 1 00:15:32.977 19:59:16 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.977 19:59:16 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.977 19:59:16 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:32.978 19:59:16 ftl -- scripts/common.sh@353 -- # local d=1 00:15:32.978 19:59:16 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.978 19:59:16 ftl -- scripts/common.sh@355 -- # echo 1 00:15:32.978 19:59:16 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.978 19:59:16 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:32.978 19:59:16 ftl -- scripts/common.sh@353 -- # local d=2 00:15:32.978 19:59:16 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.978 19:59:16 ftl -- scripts/common.sh@355 -- # echo 2 00:15:32.978 19:59:16 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.978 19:59:16 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.978 19:59:16 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.978 19:59:16 ftl -- scripts/common.sh@368 -- # return 0 00:15:32.978 19:59:16 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.978 19:59:16 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:32.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.978 --rc genhtml_branch_coverage=1 00:15:32.978 --rc genhtml_function_coverage=1 00:15:32.978 --rc genhtml_legend=1 00:15:32.978 --rc geninfo_all_blocks=1 00:15:32.978 --rc geninfo_unexecuted_blocks=1 00:15:32.978 00:15:32.978 ' 00:15:32.978 19:59:16 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:32.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.978 --rc genhtml_branch_coverage=1 00:15:32.978 --rc genhtml_function_coverage=1 00:15:32.978 --rc genhtml_legend=1 00:15:32.978 --rc geninfo_all_blocks=1 00:15:32.978 --rc geninfo_unexecuted_blocks=1 00:15:32.978 00:15:32.978 ' 00:15:32.978 19:59:16 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:32.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:32.978 --rc genhtml_branch_coverage=1 00:15:32.978 --rc genhtml_function_coverage=1 00:15:32.978 --rc genhtml_legend=1 00:15:32.978 --rc geninfo_all_blocks=1 00:15:32.978 --rc geninfo_unexecuted_blocks=1 00:15:32.978 00:15:32.978 ' 00:15:32.978 19:59:16 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:32.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.978 --rc genhtml_branch_coverage=1 00:15:32.978 --rc genhtml_function_coverage=1 00:15:32.978 --rc genhtml_legend=1 00:15:32.978 --rc geninfo_all_blocks=1 00:15:32.978 --rc geninfo_unexecuted_blocks=1 00:15:32.978 00:15:32.978 ' 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:32.978 19:59:16 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:32.978 19:59:16 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.978 19:59:16 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:32.978 19:59:16 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:32.978 19:59:16 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:32.978 19:59:16 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.978 19:59:16 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:32.978 19:59:16 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:32.978 19:59:16 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.978 19:59:16 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.978 19:59:16 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:32.978 19:59:16 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:32.978 19:59:16 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:32.978 19:59:16 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:32.978 19:59:16 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:32.978 19:59:16 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:32.978 19:59:16 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.978 19:59:16 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:32.978 19:59:16 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:32.978 19:59:16 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:32.978 19:59:16 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:32.978 19:59:16 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:32.978 19:59:16 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:32.978 19:59:16 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:32.978 19:59:16 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:15:32.978 19:59:16 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:32.978 19:59:16 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:32.978 19:59:16 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:32.978 19:59:16 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:32.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.978 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.978 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.978 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:32.978 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:33.237 19:59:17 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72653 00:15:33.237 19:59:17 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72653 00:15:33.237 19:59:17 ftl -- common/autotest_common.sh@831 -- # '[' -z 72653 ']' 00:15:33.237 19:59:17 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:33.237 19:59:17 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.237 19:59:17 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.237 19:59:17 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:33.237 19:59:17 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.237 19:59:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:33.237 [2024-09-30 19:59:17.438141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:33.237 [2024-09-30 19:59:17.438473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72653 ] 00:15:33.237 [2024-09-30 19:59:17.583062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.496 [2024-09-30 19:59:17.747737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.062 19:59:18 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.062 19:59:18 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:34.062 19:59:18 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:34.320 19:59:18 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:34.886 19:59:19 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:34.886 19:59:19 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@50 -- # break 
00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:35.453 19:59:19 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:35.712 19:59:19 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:35.712 19:59:19 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:35.712 19:59:19 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:35.712 19:59:19 ftl -- ftl/ftl.sh@63 -- # break 00:15:35.712 19:59:19 ftl -- ftl/ftl.sh@66 -- # killprocess 72653 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@950 -- # '[' -z 72653 ']' 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@954 -- # kill -0 72653 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@955 -- # uname 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72653 00:15:35.712 killing process with pid 72653 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72653' 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@969 -- # kill 72653 00:15:35.712 19:59:19 ftl -- common/autotest_common.sh@974 -- # wait 72653 00:15:37.091 19:59:21 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:37.091 19:59:21 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:37.091 19:59:21 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:37.091 19:59:21 ftl -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:15:37.091 19:59:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:37.091 ************************************ 00:15:37.091 START TEST ftl_fio_basic 00:15:37.091 ************************************ 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:37.091 * Looking for test storage... 00:15:37.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 
00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.091 --rc genhtml_branch_coverage=1 00:15:37.091 --rc genhtml_function_coverage=1 00:15:37.091 --rc genhtml_legend=1 00:15:37.091 --rc geninfo_all_blocks=1 00:15:37.091 --rc geninfo_unexecuted_blocks=1 00:15:37.091 00:15:37.091 ' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:37.091 --rc genhtml_branch_coverage=1 00:15:37.091 --rc genhtml_function_coverage=1 00:15:37.091 --rc genhtml_legend=1 00:15:37.091 --rc geninfo_all_blocks=1 00:15:37.091 --rc geninfo_unexecuted_blocks=1 00:15:37.091 00:15:37.091 ' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.091 --rc genhtml_branch_coverage=1 00:15:37.091 --rc genhtml_function_coverage=1 00:15:37.091 --rc genhtml_legend=1 00:15:37.091 --rc geninfo_all_blocks=1 00:15:37.091 --rc geninfo_unexecuted_blocks=1 00:15:37.091 00:15:37.091 ' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.091 --rc genhtml_branch_coverage=1 00:15:37.091 --rc genhtml_function_coverage=1 00:15:37.091 --rc genhtml_legend=1 00:15:37.091 --rc geninfo_all_blocks=1 00:15:37.091 --rc geninfo_unexecuted_blocks=1 00:15:37.091 00:15:37.091 ' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # 
export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export 
FTL_BDEV_NAME=ftl0 00:15:37.091 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72780 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72780 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72780 ']' 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.092 19:59:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:37.351 [2024-09-30 19:59:21.495711] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:37.351 [2024-09-30 19:59:21.495946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72780 ] 00:15:37.351 [2024-09-30 19:59:21.643750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.610 [2024-09-30 19:59:21.811043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.610 [2024-09-30 19:59:21.811383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.610 [2024-09-30 19:59:21.811353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:38.177 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:38.435 19:59:22 
ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:38.435 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:38.694 { 00:15:38.694 "name": "nvme0n1", 00:15:38.694 "aliases": [ 00:15:38.694 "ea7dcf77-c366-4f95-a3d4-7f0c7e3f4fda" 00:15:38.694 ], 00:15:38.694 "product_name": "NVMe disk", 00:15:38.694 "block_size": 4096, 00:15:38.694 "num_blocks": 1310720, 00:15:38.694 "uuid": "ea7dcf77-c366-4f95-a3d4-7f0c7e3f4fda", 00:15:38.694 "numa_id": -1, 00:15:38.694 "assigned_rate_limits": { 00:15:38.694 "rw_ios_per_sec": 0, 00:15:38.694 "rw_mbytes_per_sec": 0, 00:15:38.694 "r_mbytes_per_sec": 0, 00:15:38.694 "w_mbytes_per_sec": 0 00:15:38.694 }, 00:15:38.694 "claimed": false, 00:15:38.694 "zoned": false, 00:15:38.694 "supported_io_types": { 00:15:38.694 "read": true, 00:15:38.694 "write": true, 00:15:38.694 "unmap": true, 00:15:38.694 "flush": true, 00:15:38.694 "reset": true, 00:15:38.694 "nvme_admin": true, 00:15:38.694 "nvme_io": true, 00:15:38.694 "nvme_io_md": false, 00:15:38.694 "write_zeroes": true, 00:15:38.694 "zcopy": false, 00:15:38.694 "get_zone_info": false, 00:15:38.694 "zone_management": false, 00:15:38.694 "zone_append": false, 00:15:38.694 "compare": true, 00:15:38.694 "compare_and_write": false, 00:15:38.694 "abort": true, 00:15:38.694 "seek_hole": false, 00:15:38.694 "seek_data": false, 00:15:38.694 "copy": true, 00:15:38.694 "nvme_iov_md": false 00:15:38.694 }, 00:15:38.694 "driver_specific": { 00:15:38.694 "nvme": [ 00:15:38.694 { 00:15:38.694 "pci_address": "0000:00:11.0", 00:15:38.694 "trid": { 00:15:38.694 "trtype": "PCIe", 00:15:38.694 "traddr": "0000:00:11.0" 00:15:38.694 }, 00:15:38.694 "ctrlr_data": { 00:15:38.694 "cntlid": 0, 00:15:38.694 "vendor_id": "0x1b36", 
00:15:38.694 "model_number": "QEMU NVMe Ctrl", 00:15:38.694 "serial_number": "12341", 00:15:38.694 "firmware_revision": "8.0.0", 00:15:38.694 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:38.694 "oacs": { 00:15:38.694 "security": 0, 00:15:38.694 "format": 1, 00:15:38.694 "firmware": 0, 00:15:38.694 "ns_manage": 1 00:15:38.694 }, 00:15:38.694 "multi_ctrlr": false, 00:15:38.694 "ana_reporting": false 00:15:38.694 }, 00:15:38.694 "vs": { 00:15:38.694 "nvme_version": "1.4" 00:15:38.694 }, 00:15:38.694 "ns_data": { 00:15:38.694 "id": 1, 00:15:38.694 "can_share": false 00:15:38.694 } 00:15:38.694 } 00:15:38.694 ], 00:15:38.694 "mp_policy": "active_passive" 00:15:38.694 } 00:15:38.694 } 00:15:38.694 ]' 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:38.694 19:59:22 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:38.953 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:38.953 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:38.953 19:59:23 ftl.ftl_fio_basic -- 
ftl/common.sh@68 -- # lvs=2a78c483-4eb7-4193-813e-19fa8cea1615 00:15:38.953 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2a78c483-4eb7-4193-813e-19fa8cea1615 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:39.211 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:39.469 { 00:15:39.469 "name": "75448719-2682-4cef-8125-a7ab15f9080f", 00:15:39.469 "aliases": [ 00:15:39.469 "lvs/nvme0n1p0" 00:15:39.469 ], 00:15:39.469 "product_name": "Logical Volume", 00:15:39.469 "block_size": 4096, 00:15:39.469 "num_blocks": 26476544, 00:15:39.469 "uuid": "75448719-2682-4cef-8125-a7ab15f9080f", 00:15:39.469 
"assigned_rate_limits": { 00:15:39.469 "rw_ios_per_sec": 0, 00:15:39.469 "rw_mbytes_per_sec": 0, 00:15:39.469 "r_mbytes_per_sec": 0, 00:15:39.469 "w_mbytes_per_sec": 0 00:15:39.469 }, 00:15:39.469 "claimed": false, 00:15:39.469 "zoned": false, 00:15:39.469 "supported_io_types": { 00:15:39.469 "read": true, 00:15:39.469 "write": true, 00:15:39.469 "unmap": true, 00:15:39.469 "flush": false, 00:15:39.469 "reset": true, 00:15:39.469 "nvme_admin": false, 00:15:39.469 "nvme_io": false, 00:15:39.469 "nvme_io_md": false, 00:15:39.469 "write_zeroes": true, 00:15:39.469 "zcopy": false, 00:15:39.469 "get_zone_info": false, 00:15:39.469 "zone_management": false, 00:15:39.469 "zone_append": false, 00:15:39.469 "compare": false, 00:15:39.469 "compare_and_write": false, 00:15:39.469 "abort": false, 00:15:39.469 "seek_hole": true, 00:15:39.469 "seek_data": true, 00:15:39.469 "copy": false, 00:15:39.469 "nvme_iov_md": false 00:15:39.469 }, 00:15:39.469 "driver_specific": { 00:15:39.469 "lvol": { 00:15:39.469 "lvol_store_uuid": "2a78c483-4eb7-4193-813e-19fa8cea1615", 00:15:39.469 "base_bdev": "nvme0n1", 00:15:39.469 "thin_provision": true, 00:15:39.469 "num_allocated_clusters": 0, 00:15:39.469 "snapshot": false, 00:15:39.469 "clone": false, 00:15:39.469 "esnap_clone": false 00:15:39.469 } 00:15:39.469 } 00:15:39.469 } 00:15:39.469 ]' 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 
00:15:39.469 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:39.469 19:59:23 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:39.726 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:39.727 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75448719-2682-4cef-8125-a7ab15f9080f 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:39.984 { 00:15:39.984 "name": "75448719-2682-4cef-8125-a7ab15f9080f", 00:15:39.984 "aliases": [ 00:15:39.984 "lvs/nvme0n1p0" 00:15:39.984 ], 00:15:39.984 "product_name": "Logical Volume", 00:15:39.984 "block_size": 4096, 00:15:39.984 "num_blocks": 26476544, 00:15:39.984 "uuid": "75448719-2682-4cef-8125-a7ab15f9080f", 00:15:39.984 "assigned_rate_limits": { 00:15:39.984 "rw_ios_per_sec": 0, 00:15:39.984 "rw_mbytes_per_sec": 0, 00:15:39.984 "r_mbytes_per_sec": 0, 00:15:39.984 "w_mbytes_per_sec": 0 00:15:39.984 }, 00:15:39.984 "claimed": false, 00:15:39.984 "zoned": false, 00:15:39.984 "supported_io_types": { 00:15:39.984 "read": true, 00:15:39.984 "write": true, 00:15:39.984 "unmap": true, 00:15:39.984 "flush": false, 00:15:39.984 "reset": true, 00:15:39.984 "nvme_admin": false, 
00:15:39.984 "nvme_io": false, 00:15:39.984 "nvme_io_md": false, 00:15:39.984 "write_zeroes": true, 00:15:39.984 "zcopy": false, 00:15:39.984 "get_zone_info": false, 00:15:39.984 "zone_management": false, 00:15:39.984 "zone_append": false, 00:15:39.984 "compare": false, 00:15:39.984 "compare_and_write": false, 00:15:39.984 "abort": false, 00:15:39.984 "seek_hole": true, 00:15:39.984 "seek_data": true, 00:15:39.984 "copy": false, 00:15:39.984 "nvme_iov_md": false 00:15:39.984 }, 00:15:39.984 "driver_specific": { 00:15:39.984 "lvol": { 00:15:39.984 "lvol_store_uuid": "2a78c483-4eb7-4193-813e-19fa8cea1615", 00:15:39.984 "base_bdev": "nvme0n1", 00:15:39.984 "thin_provision": true, 00:15:39.984 "num_allocated_clusters": 0, 00:15:39.984 "snapshot": false, 00:15:39.984 "clone": false, 00:15:39.984 "esnap_clone": false 00:15:39.984 } 00:15:39.984 } 00:15:39.984 } 00:15:39.984 ]' 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:39.984 19:59:24 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:40.242 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary 
operator expected 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 75448719-2682-4cef-8125-a7ab15f9080f 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=75448719-2682-4cef-8125-a7ab15f9080f 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:40.242 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75448719-2682-4cef-8125-a7ab15f9080f 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:40.500 { 00:15:40.500 "name": "75448719-2682-4cef-8125-a7ab15f9080f", 00:15:40.500 "aliases": [ 00:15:40.500 "lvs/nvme0n1p0" 00:15:40.500 ], 00:15:40.500 "product_name": "Logical Volume", 00:15:40.500 "block_size": 4096, 00:15:40.500 "num_blocks": 26476544, 00:15:40.500 "uuid": "75448719-2682-4cef-8125-a7ab15f9080f", 00:15:40.500 "assigned_rate_limits": { 00:15:40.500 "rw_ios_per_sec": 0, 00:15:40.500 "rw_mbytes_per_sec": 0, 00:15:40.500 "r_mbytes_per_sec": 0, 00:15:40.500 "w_mbytes_per_sec": 0 00:15:40.500 }, 00:15:40.500 "claimed": false, 00:15:40.500 "zoned": false, 00:15:40.500 "supported_io_types": { 00:15:40.500 "read": true, 00:15:40.500 "write": true, 00:15:40.500 "unmap": true, 00:15:40.500 "flush": false, 00:15:40.500 "reset": true, 00:15:40.500 "nvme_admin": false, 00:15:40.500 "nvme_io": false, 00:15:40.500 "nvme_io_md": false, 00:15:40.500 "write_zeroes": true, 00:15:40.500 "zcopy": false, 00:15:40.500 "get_zone_info": false, 00:15:40.500 "zone_management": false, 00:15:40.500 "zone_append": false, 00:15:40.500 "compare": false, 00:15:40.500 "compare_and_write": false, 00:15:40.500 "abort": false, 00:15:40.500 "seek_hole": true, 00:15:40.500 "seek_data": 
true, 00:15:40.500 "copy": false, 00:15:40.500 "nvme_iov_md": false 00:15:40.500 }, 00:15:40.500 "driver_specific": { 00:15:40.500 "lvol": { 00:15:40.500 "lvol_store_uuid": "2a78c483-4eb7-4193-813e-19fa8cea1615", 00:15:40.500 "base_bdev": "nvme0n1", 00:15:40.500 "thin_provision": true, 00:15:40.500 "num_allocated_clusters": 0, 00:15:40.500 "snapshot": false, 00:15:40.500 "clone": false, 00:15:40.500 "esnap_clone": false 00:15:40.500 } 00:15:40.500 } 00:15:40.500 } 00:15:40.500 ]' 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:40.500 19:59:24 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 75448719-2682-4cef-8125-a7ab15f9080f -c nvc0n1p0 --l2p_dram_limit 60 00:15:40.760 [2024-09-30 19:59:24.973755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.973795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:40.760 [2024-09-30 19:59:24.973810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:40.760 [2024-09-30 19:59:24.973816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.973871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 
19:59:24.973879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:40.760 [2024-09-30 19:59:24.973888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:40.760 [2024-09-30 19:59:24.973894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.973926] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:40.760 [2024-09-30 19:59:24.974453] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:40.760 [2024-09-30 19:59:24.974483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.974489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:40.760 [2024-09-30 19:59:24.974498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:15:40.760 [2024-09-30 19:59:24.974505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.974566] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c65fa1b8-9ae9-4cbb-a8e8-3d2d0dfaa51c 00:15:40.760 [2024-09-30 19:59:24.975862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.975885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:40.760 [2024-09-30 19:59:24.975894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:40.760 [2024-09-30 19:59:24.975903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.982784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.982812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:40.760 [2024-09-30 19:59:24.982820] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:15:40.760 [2024-09-30 19:59:24.982827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.982913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.982923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:40.760 [2024-09-30 19:59:24.982929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:15:40.760 [2024-09-30 19:59:24.982939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.982981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.982991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:40.760 [2024-09-30 19:59:24.982998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:40.760 [2024-09-30 19:59:24.983006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.983031] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:40.760 [2024-09-30 19:59:24.986284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.986307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:40.760 [2024-09-30 19:59:24.986316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:15:40.760 [2024-09-30 19:59:24.986322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.986354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.986361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:40.760 [2024-09-30 19:59:24.986370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:40.760 
[2024-09-30 19:59:24.986376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.986416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:40.760 [2024-09-30 19:59:24.986536] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:40.760 [2024-09-30 19:59:24.986549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:40.760 [2024-09-30 19:59:24.986559] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:40.760 [2024-09-30 19:59:24.986569] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986580] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:40.760 [2024-09-30 19:59:24.986594] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:40.760 [2024-09-30 19:59:24.986601] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:40.760 [2024-09-30 19:59:24.986607] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:40.760 [2024-09-30 19:59:24.986615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.760 [2024-09-30 19:59:24.986621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:40.760 [2024-09-30 19:59:24.986630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:15:40.760 [2024-09-30 19:59:24.986635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.986707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:15:40.760 [2024-09-30 19:59:24.986713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:40.760 [2024-09-30 19:59:24.986723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:40.760 [2024-09-30 19:59:24.986729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.760 [2024-09-30 19:59:24.986820] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:40.760 [2024-09-30 19:59:24.986828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:40.760 [2024-09-30 19:59:24.986836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:40.760 [2024-09-30 19:59:24.986856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:40.760 [2024-09-30 19:59:24.986875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:40.760 [2024-09-30 19:59:24.986888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:40.760 [2024-09-30 19:59:24.986895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:40.760 [2024-09-30 19:59:24.986901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:40.760 [2024-09-30 19:59:24.986906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:40.760 [2024-09-30 19:59:24.986913] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:15:40.760 [2024-09-30 19:59:24.986918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:40.760 [2024-09-30 19:59:24.986932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:40.760 [2024-09-30 19:59:24.986952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:40.760 [2024-09-30 19:59:24.986969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.760 [2024-09-30 19:59:24.986984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:40.760 [2024-09-30 19:59:24.986992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:40.760 [2024-09-30 19:59:24.986997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.760 [2024-09-30 19:59:24.987004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:40.760 [2024-09-30 19:59:24.987009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:40.760 [2024-09-30 19:59:24.987015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.760 [2024-09-30 19:59:24.987020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:40.760 [2024-09-30 19:59:24.987028] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:40.760 [2024-09-30 19:59:24.987034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:40.760 [2024-09-30 19:59:24.987040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:40.760 [2024-09-30 19:59:24.987045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:40.760 [2024-09-30 19:59:24.987051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:40.760 [2024-09-30 19:59:24.987056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:40.760 [2024-09-30 19:59:24.987063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:40.760 [2024-09-30 19:59:24.987080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.760 [2024-09-30 19:59:24.987087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:40.760 [2024-09-30 19:59:24.987092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:40.760 [2024-09-30 19:59:24.987100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.760 [2024-09-30 19:59:24.987106] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:40.760 [2024-09-30 19:59:24.987116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:40.761 [2024-09-30 19:59:24.987122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:40.761 [2024-09-30 19:59:24.987130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.761 [2024-09-30 19:59:24.987136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:40.761 [2024-09-30 19:59:24.987146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:40.761 [2024-09-30 19:59:24.987150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:40.761 [2024-09-30 
19:59:24.987157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:40.761 [2024-09-30 19:59:24.987162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:40.761 [2024-09-30 19:59:24.987169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:40.761 [2024-09-30 19:59:24.987177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:40.761 [2024-09-30 19:59:24.987186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:40.761 [2024-09-30 19:59:24.987203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:40.761 [2024-09-30 19:59:24.987209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:40.761 [2024-09-30 19:59:24.987216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:40.761 [2024-09-30 19:59:24.987221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:40.761 [2024-09-30 19:59:24.987228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:40.761 [2024-09-30 19:59:24.987234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:40.761 [2024-09-30 19:59:24.987241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:40.761 [2024-09-30 19:59:24.987247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:40.761 [2024-09-30 19:59:24.987256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:40.761 [2024-09-30 19:59:24.987304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:40.761 [2024-09-30 19:59:24.987312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:40.761 [2024-09-30 19:59:24.987325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:40.761 [2024-09-30 19:59:24.987331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:40.761 [2024-09-30 19:59:24.987339] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:40.761 [2024-09-30 19:59:24.987345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.761 [2024-09-30 19:59:24.987353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:40.761 [2024-09-30 19:59:24.987359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:15:40.761 [2024-09-30 19:59:24.987366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.761 [2024-09-30 19:59:24.987451] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:40.761 [2024-09-30 19:59:24.987465] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:43.294 [2024-09-30 19:59:27.095010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.095049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:43.294 [2024-09-30 19:59:27.095061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2107.551 ms 00:15:43.294 [2024-09-30 19:59:27.095069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.128195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.128243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:43.294 [2024-09-30 19:59:27.128257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.946 ms 00:15:43.294 [2024-09-30 19:59:27.128292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.128441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.128458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band 
addresses 00:15:43.294 [2024-09-30 19:59:27.128467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:43.294 [2024-09-30 19:59:27.128479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.158338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.158368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:43.294 [2024-09-30 19:59:27.158378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.817 ms 00:15:43.294 [2024-09-30 19:59:27.158385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.158418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.158429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:43.294 [2024-09-30 19:59:27.158437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:43.294 [2024-09-30 19:59:27.158445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.158856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.158874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:43.294 [2024-09-30 19:59:27.158881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:15:43.294 [2024-09-30 19:59:27.158889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.158998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.159006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:43.294 [2024-09-30 19:59:27.159013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:15:43.294 [2024-09-30 19:59:27.159022] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.172455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.172481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:43.294 [2024-09-30 19:59:27.172489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.406 ms 00:15:43.294 [2024-09-30 19:59:27.172498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.182378] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:43.294 [2024-09-30 19:59:27.197906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.198063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:43.294 [2024-09-30 19:59:27.198079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.320 ms 00:15:43.294 [2024-09-30 19:59:27.198086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.240375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.240500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:43.294 [2024-09-30 19:59:27.240517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.257 ms 00:15:43.294 [2024-09-30 19:59:27.240524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 [2024-09-30 19:59:27.240676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.294 [2024-09-30 19:59:27.240685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:43.294 [2024-09-30 19:59:27.240696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:15:43.294 [2024-09-30 19:59:27.240704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.294 
[2024-09-30 19:59:27.258342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.294 [2024-09-30 19:59:27.258368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:15:43.294 [2024-09-30 19:59:27.258379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.595 ms
00:15:43.294 [2024-09-30 19:59:27.258386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.294 [2024-09-30 19:59:27.275678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.294 [2024-09-30 19:59:27.275781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:15:43.294 [2024-09-30 19:59:27.275797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.253 ms
00:15:43.294 [2024-09-30 19:59:27.275803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.294 [2024-09-30 19:59:27.276261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.294 [2024-09-30 19:59:27.276291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:15:43.294 [2024-09-30 19:59:27.276300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms
00:15:43.294 [2024-09-30 19:59:27.276306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.294 [2024-09-30 19:59:27.331042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.295 [2024-09-30 19:59:27.331157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:15:43.295 [2024-09-30 19:59:27.331177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.695 ms
00:15:43.295 [2024-09-30 19:59:27.331185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.295 [2024-09-30 19:59:27.350709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.295 [2024-09-30 19:59:27.350738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:15:43.295 [2024-09-30 19:59:27.350751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.452 ms
00:15:43.295 [2024-09-30 19:59:27.350758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.295 [2024-09-30 19:59:27.368571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.295 [2024-09-30 19:59:27.368598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:15:43.295 [2024-09-30 19:59:27.368608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.775 ms
00:15:43.295 [2024-09-30 19:59:27.368614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.295 [2024-09-30 19:59:27.386607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.295 [2024-09-30 19:59:27.386633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:15:43.295 [2024-09-30 19:59:27.386644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms
00:15:43.295 [2024-09-30 19:59:27.386651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.295 [2024-09-30 19:59:27.386690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.295 [2024-09-30 19:59:27.386699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:15:43.295 [2024-09-30 19:59:27.386709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:15:43.295 [2024-09-30 19:59:27.386715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.295 [2024-09-30 19:59:27.386791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:43.295 [2024-09-30 19:59:27.386800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:15:43.295 [2024-09-30 19:59:27.386808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:15:43.295 [2024-09-30 19:59:27.386816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:43.295 [2024-09-30 19:59:27.388003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2413.806 ms, result 0
00:15:43.295 {
00:15:43.295 "name": "ftl0",
00:15:43.295 "uuid": "c65fa1b8-9ae9-4cbb-a8e8-3d2d0dfaa51c"
00:15:43.295 }
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:15:43.295 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
[
00:15:43.553 {
00:15:43.553 "name": "ftl0",
00:15:43.553 "aliases": [
00:15:43.553 "c65fa1b8-9ae9-4cbb-a8e8-3d2d0dfaa51c"
00:15:43.553 ],
00:15:43.553 "product_name": "FTL disk",
00:15:43.553 "block_size": 4096,
00:15:43.553 "num_blocks": 20971520,
00:15:43.553 "uuid": "c65fa1b8-9ae9-4cbb-a8e8-3d2d0dfaa51c",
00:15:43.553 "assigned_rate_limits": {
00:15:43.553 "rw_ios_per_sec": 0,
00:15:43.553 "rw_mbytes_per_sec": 0,
00:15:43.553 "r_mbytes_per_sec": 0,
00:15:43.553 "w_mbytes_per_sec": 0
00:15:43.553 },
00:15:43.553 "claimed": false,
00:15:43.553 "zoned": false,
00:15:43.553 "supported_io_types": {
00:15:43.553 "read": true,
00:15:43.553 "write": true,
00:15:43.553 "unmap": true,
00:15:43.553 "flush": true,
00:15:43.553 "reset": false,
00:15:43.553 "nvme_admin": false,
00:15:43.553 "nvme_io": false,
00:15:43.553 "nvme_io_md": false,
00:15:43.553 "write_zeroes": true,
00:15:43.553 "zcopy": false,
00:15:43.553 "get_zone_info": false,
00:15:43.553 "zone_management": false,
00:15:43.553 "zone_append": false,
00:15:43.553 "compare": false,
00:15:43.553 "compare_and_write": false,
00:15:43.553 "abort": false,
00:15:43.554 "seek_hole": false,
00:15:43.554 "seek_data": false,
00:15:43.554 "copy": false,
00:15:43.554 "nvme_iov_md": false
00:15:43.554 },
00:15:43.554 "driver_specific": {
00:15:43.554 "ftl": {
00:15:43.554 "base_bdev": "75448719-2682-4cef-8125-a7ab15f9080f",
00:15:43.554 "cache": "nvc0n1p0"
00:15:43.554 }
00:15:43.554 }
00:15:43.554 }
00:15:43.554 ]
00:15:43.554 19:59:27 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0
00:15:43.554 19:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:15:43.554 19:59:27 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:15:43.812 19:59:28 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
00:15:44.073 19:59:28 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
[2024-09-30 19:59:28.204584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.204617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:15:44.073 [2024-09-30 19:59:28.204626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:15:44.073 [2024-09-30 19:59:28.204635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.204666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:15:44.073 [2024-09-30 19:59:28.206899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.206925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:15:44.073 [2024-09-30 19:59:28.206935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms
00:15:44.073 [2024-09-30 19:59:28.206942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.207353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.207366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:15:44.073 [2024-09-30 19:59:28.207375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms
00:15:44.073 [2024-09-30 19:59:28.207381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.209835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.209853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:15:44.073 [2024-09-30 19:59:28.209863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms
00:15:44.073 [2024-09-30 19:59:28.209870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.214539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.214561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:15:44.073 [2024-09-30 19:59:28.214571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms
00:15:44.073 [2024-09-30 19:59:28.214578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.232839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.232961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:15:44.073 [2024-09-30 19:59:28.232978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms
00:15:44.073 [2024-09-30 19:59:28.232984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.245341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.245368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:15:44.073 [2024-09-30 19:59:28.245380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.318 ms
00:15:44.073 [2024-09-30 19:59:28.245386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.245543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.245552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:15:44.073 [2024-09-30 19:59:28.245561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms
00:15:44.073 [2024-09-30 19:59:28.245569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.263485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.263593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:15:44.073 [2024-09-30 19:59:28.263608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.897 ms
00:15:44.073 [2024-09-30 19:59:28.263614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.280952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.280983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:15:44.073 [2024-09-30 19:59:28.280992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.301 ms
00:15:44.073 [2024-09-30 19:59:28.280998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.298079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.298102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:15:44.073 [2024-09-30 19:59:28.298111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.039 ms
00:15:44.073 [2024-09-30 19:59:28.298117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.314953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.315045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:15:44.073 [2024-09-30 19:59:28.315059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.757 ms
00:15:44.073 [2024-09-30 19:59:28.315065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.315097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:15:44.073 [2024-09-30 19:59:28.315110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:15:44.073 [2024-09-30 19:59:28.315823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:15:44.073 [2024-09-30 19:59:28.315831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c65fa1b8-9ae9-4cbb-a8e8-3d2d0dfaa51c
00:15:44.073 [2024-09-30 19:59:28.315837] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:15:44.073 [2024-09-30 19:59:28.315846] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:15:44.073 [2024-09-30 19:59:28.315851] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:15:44.073 [2024-09-30 19:59:28.315859] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:15:44.073 [2024-09-30 19:59:28.315864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:15:44.073 [2024-09-30 19:59:28.315872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:15:44.073 [2024-09-30 19:59:28.315877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:15:44.073 [2024-09-30 19:59:28.315883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:15:44.073 [2024-09-30 19:59:28.315889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:15:44.073 [2024-09-30 19:59:28.315896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.315902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:15:44.073 [2024-09-30 19:59:28.315910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms
00:15:44.073 [2024-09-30 19:59:28.315917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.325861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.325948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:15:44.073 [2024-09-30 19:59:28.325962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.911 ms
00:15:44.073 [2024-09-30 19:59:28.325969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.326256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:44.073 [2024-09-30 19:59:28.326265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:15:44.073 [2024-09-30 19:59:28.326287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms
00:15:44.073 [2024-09-30 19:59:28.326292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.362698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.073 [2024-09-30 19:59:28.362727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:44.073 [2024-09-30 19:59:28.362738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.073 [2024-09-30 19:59:28.362745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.362801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.073 [2024-09-30 19:59:28.362809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:44.073 [2024-09-30 19:59:28.362817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.073 [2024-09-30 19:59:28.362822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.362899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.073 [2024-09-30 19:59:28.362908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:44.073 [2024-09-30 19:59:28.362915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.073 [2024-09-30 19:59:28.362921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.362947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.073 [2024-09-30 19:59:28.362953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:44.073 [2024-09-30 19:59:28.362962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.073 [2024-09-30 19:59:28.362967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.073 [2024-09-30 19:59:28.428633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.073 [2024-09-30 19:59:28.428777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:15:44.073 [2024-09-30 19:59:28.428794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.073 [2024-09-30 19:59:28.428802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:15:44.332 [2024-09-30 19:59:28.479297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:15:44.332 [2024-09-30 19:59:28.479425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:15:44.332 [2024-09-30 19:59:28.479505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:15:44.332 [2024-09-30 19:59:28.479625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:15:44.332 [2024-09-30 19:59:28.479690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:15:44.332 [2024-09-30 19:59:28.479803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.479866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:44.332 [2024-09-30 19:59:28.479873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:44.332 [2024-09-30 19:59:28.479881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:44.332 [2024-09-30 19:59:28.479887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:44.332 [2024-09-30 19:59:28.480042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.433 ms, result 0
00:15:44.332 true
00:15:44.332 19:59:28 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72780
00:15:44.332 19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72780 ']'
00:15:44.332 19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72780
00:15:44.332 19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname
00:15:44.332 19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:44.332 19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72780
killing process with pid 72780
19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72780'
19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72780
19:59:28 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72780
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:15:50.893 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:50.894 19:59:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:15:50.894 fio-3.35
00:15:50.894 Starting 1 thread
00:15:55.098
00:15:55.099 test: (groupid=0, jobs=1): err= 0: pid=72962: Mon Sep 30 19:59:39 2024
00:15:55.099 read: IOPS=1172, BW=77.8MiB/s (81.6MB/s)(255MiB/3270msec)
00:15:55.099 slat (nsec): min=4072, max=28334, avg=5808.97, stdev=2089.60
00:15:55.099 clat (usec): min=274, max=1199, avg=379.81, stdev=89.11
00:15:55.099 lat (usec): min=282, max=1210, avg=385.62, stdev=89.39
00:15:55.099 clat percentiles (usec):
00:15:55.099 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 326],
00:15:55.099 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338],
00:15:55.099 | 70.00th=[ 396], 80.00th=[ 457], 90.00th=[ 529], 95.00th=[ 594],
00:15:55.099 | 99.00th=[ 676], 99.50th=[ 725], 99.90th=[ 783], 99.95th=[ 898], 00:15:55.099 | 99.99th=[ 1205] 00:15:55.099 write: IOPS=1180, BW=78.4MiB/s (82.2MB/s)(256MiB/3267msec); 0 zone resets 00:15:55.099 slat (nsec): min=14780, max=62751, avg=20449.50, stdev=3647.57 00:15:55.099 clat (usec): min=304, max=1176, avg=430.23, stdev=128.99 00:15:55.099 lat (usec): min=324, max=1197, avg=450.68, stdev=128.90 00:15:55.099 clat percentiles (usec): 00:15:55.099 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 347], 00:15:55.099 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:15:55.099 | 70.00th=[ 437], 80.00th=[ 553], 90.00th=[ 627], 95.00th=[ 676], 00:15:55.099 | 99.00th=[ 889], 99.50th=[ 971], 99.90th=[ 1057], 99.95th=[ 1106], 00:15:55.099 | 99.99th=[ 1172] 00:15:55.099 bw ( KiB/s): min=58208, max=90712, per=99.16%, avg=79582.67, stdev=11262.05, samples=6 00:15:55.099 iops : min= 856, max= 1334, avg=1170.33, stdev=165.62, samples=6 00:15:55.099 lat (usec) : 500=82.40%, 750=16.06%, 1000=1.38% 00:15:55.099 lat (msec) : 2=0.16% 00:15:55.099 cpu : usr=99.24%, sys=0.09%, ctx=6, majf=0, minf=1169 00:15:55.099 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.099 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.099 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.099 00:15:55.099 Run status group 0 (all jobs): 00:15:55.099 READ: bw=77.8MiB/s (81.6MB/s), 77.8MiB/s-77.8MiB/s (81.6MB/s-81.6MB/s), io=255MiB (267MB), run=3270-3270msec 00:15:55.099 WRITE: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=256MiB (269MB), run=3267-3267msec 00:15:57.010 ----------------------------------------------------- 00:15:57.010 Suppressions used: 00:15:57.010 count bytes template 00:15:57.010 1 5 
/usr/src/fio/parse.c 00:15:57.010 1 8 libtcmalloc_minimal.so 00:15:57.010 1 904 libcrypto.so 00:15:57.010 ----------------------------------------------------- 00:15:57.011 00:15:57.011 19:59:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:57.011 19:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.011 19:59:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.011 19:59:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:57.011 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:57.011 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:57.011 fio-3.35 00:15:57.011 Starting 2 threads 00:16:23.567 00:16:23.567 first_half: (groupid=0, jobs=1): err= 0: pid=73065: Mon Sep 30 20:00:03 2024 00:16:23.567 read: IOPS=3048, BW=11.9MiB/s (12.5MB/s)(256MiB/21479msec) 00:16:23.567 slat (nsec): min=2982, max=27165, avg=5406.83, stdev=1007.49 00:16:23.567 clat (usec): min=513, max=275128, avg=35633.92, stdev=21357.59 00:16:23.567 lat (usec): min=517, max=275135, avg=35639.32, stdev=21357.67 00:16:23.567 clat percentiles (msec): 00:16:23.567 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:16:23.567 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:23.567 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 69], 00:16:23.567 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 197], 99.95th=[ 234], 00:16:23.567 | 99.99th=[ 271] 00:16:23.567 write: IOPS=3055, 
BW=11.9MiB/s (12.5MB/s)(256MiB/21450msec); 0 zone resets 00:16:23.567 slat (usec): min=3, max=212, avg= 6.56, stdev= 2.81 00:16:23.567 clat (usec): min=376, max=40663, avg=6321.91, stdev=6141.05 00:16:23.567 lat (usec): min=389, max=40669, avg=6328.48, stdev=6141.14 00:16:23.567 clat percentiles (usec): 00:16:23.567 | 1.00th=[ 742], 5.00th=[ 873], 10.00th=[ 1188], 20.00th=[ 2671], 00:16:23.567 | 30.00th=[ 3458], 40.00th=[ 4178], 50.00th=[ 4883], 60.00th=[ 5407], 00:16:23.567 | 70.00th=[ 5866], 80.00th=[ 7046], 90.00th=[13566], 95.00th=[19530], 00:16:23.567 | 99.00th=[31327], 99.50th=[32637], 99.90th=[38011], 99.95th=[39060], 00:16:23.567 | 99.99th=[40109] 00:16:23.567 bw ( KiB/s): min= 512, max=41648, per=100.00%, avg=26040.80, stdev=13260.16, samples=20 00:16:23.567 iops : min= 128, max=10412, avg=6510.20, stdev=3315.04, samples=20 00:16:23.567 lat (usec) : 500=0.02%, 750=0.60%, 1000=3.28% 00:16:23.567 lat (msec) : 2=3.24%, 4=11.87%, 10=24.33%, 20=5.73%, 50=47.69% 00:16:23.567 lat (msec) : 100=1.59%, 250=1.65%, 500=0.01% 00:16:23.567 cpu : usr=99.38%, sys=0.07%, ctx=44, majf=0, minf=5542 00:16:23.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:23.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.567 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:23.567 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:23.567 second_half: (groupid=0, jobs=1): err= 0: pid=73066: Mon Sep 30 20:00:03 2024 00:16:23.567 read: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(256MiB/21292msec) 00:16:23.567 slat (nsec): min=2949, max=24774, avg=3939.95, stdev=962.21 00:16:23.567 clat (msec): min=10, max=196, avg=36.02, stdev=19.58 00:16:23.567 lat (msec): min=10, max=196, avg=36.03, stdev=19.58 00:16:23.567 clat percentiles (msec): 00:16:23.567 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 30], 
00:16:23.567 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:16:23.567 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 64], 00:16:23.567 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 171], 99.95th=[ 174], 00:16:23.567 | 99.99th=[ 180] 00:16:23.567 write: IOPS=3095, BW=12.1MiB/s (12.7MB/s)(256MiB/21169msec); 0 zone resets 00:16:23.567 slat (usec): min=3, max=332, avg= 5.45, stdev= 3.14 00:16:23.567 clat (usec): min=365, max=33365, avg=5570.91, stdev=3887.03 00:16:23.567 lat (usec): min=371, max=33370, avg=5576.36, stdev=3887.38 00:16:23.567 clat percentiles (usec): 00:16:23.567 | 1.00th=[ 889], 5.00th=[ 1680], 10.00th=[ 2311], 20.00th=[ 2835], 00:16:23.567 | 30.00th=[ 3523], 40.00th=[ 4178], 50.00th=[ 4817], 60.00th=[ 5276], 00:16:23.567 | 70.00th=[ 5538], 80.00th=[ 6128], 90.00th=[11207], 95.00th=[13698], 00:16:23.567 | 99.00th=[20317], 99.50th=[23987], 99.90th=[29230], 99.95th=[31851], 00:16:23.567 | 99.99th=[32637] 00:16:23.567 bw ( KiB/s): min= 4256, max=46176, per=96.81%, avg=23663.64, stdev=13227.68, samples=22 00:16:23.567 iops : min= 1064, max=11544, avg=5915.91, stdev=3306.92, samples=22 00:16:23.567 lat (usec) : 500=0.02%, 750=0.21%, 1000=0.47% 00:16:23.567 lat (msec) : 2=2.59%, 4=15.12%, 10=25.17%, 20=5.97%, 50=47.31% 00:16:23.567 lat (msec) : 100=1.54%, 250=1.61% 00:16:23.567 cpu : usr=99.24%, sys=0.16%, ctx=36, majf=0, minf=5573 00:16:23.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:23.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.567 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:23.567 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:23.567 00:16:23.567 Run status group 0 (all jobs): 00:16:23.567 READ: bw=23.8MiB/s (25.0MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=512MiB (536MB), run=21292-21479msec 00:16:23.567 WRITE: 
bw=23.9MiB/s (25.0MB/s), 11.9MiB/s-12.1MiB/s (12.5MB/s-12.7MB/s), io=512MiB (537MB), run=21169-21450msec 00:16:23.567 ----------------------------------------------------- 00:16:23.567 Suppressions used: 00:16:23.567 count bytes template 00:16:23.567 2 10 /usr/src/fio/parse.c 00:16:23.567 3 288 /usr/src/fio/iolog.c 00:16:23.567 1 8 libtcmalloc_minimal.so 00:16:23.567 1 904 libcrypto.so 00:16:23.568 ----------------------------------------------------- 00:16:23.568 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # 
shift 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:23.568 20:00:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:23.568 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:23.568 fio-3.35 00:16:23.568 Starting 1 thread 00:16:38.440 00:16:38.441 test: (groupid=0, jobs=1): err= 0: pid=73346: Mon Sep 30 20:00:20 2024 00:16:38.441 read: IOPS=7882, BW=30.8MiB/s (32.3MB/s)(255MiB/8272msec) 00:16:38.441 slat (nsec): min=2942, max=45844, avg=4439.37, stdev=1214.70 00:16:38.441 clat (usec): min=547, max=34192, avg=16229.61, stdev=2026.78 00:16:38.441 lat (usec): min=551, max=34195, avg=16234.05, stdev=2026.75 00:16:38.441 clat percentiles (usec): 00:16:38.441 | 1.00th=[13960], 5.00th=[14222], 10.00th=[15270], 20.00th=[15533], 00:16:38.441 | 30.00th=[15664], 40.00th=[15795], 50.00th=[15926], 60.00th=[15926], 00:16:38.441 | 70.00th=[16057], 80.00th=[16188], 
90.00th=[16581], 95.00th=[20579], 00:16:38.441 | 99.00th=[25035], 99.50th=[25822], 99.90th=[31589], 99.95th=[32113], 00:16:38.441 | 99.99th=[33817] 00:16:38.441 write: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(256MiB/4151msec); 0 zone resets 00:16:38.441 slat (usec): min=4, max=145, avg= 7.21, stdev= 2.66 00:16:38.441 clat (usec): min=475, max=48394, avg=8062.39, stdev=10087.45 00:16:38.441 lat (usec): min=482, max=48401, avg=8069.60, stdev=10087.41 00:16:38.441 clat percentiles (usec): 00:16:38.441 | 1.00th=[ 627], 5.00th=[ 701], 10.00th=[ 750], 20.00th=[ 865], 00:16:38.441 | 30.00th=[ 1057], 40.00th=[ 1500], 50.00th=[ 5604], 60.00th=[ 6259], 00:16:38.441 | 70.00th=[ 7308], 80.00th=[ 8455], 90.00th=[28967], 95.00th=[30802], 00:16:38.441 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:16:38.441 | 99.99th=[44827] 00:16:38.441 bw ( KiB/s): min=15760, max=85080, per=92.24%, avg=58254.22, stdev=19161.55, samples=9 00:16:38.441 iops : min= 3940, max=21270, avg=14563.56, stdev=4790.39, samples=9 00:16:38.441 lat (usec) : 500=0.01%, 750=5.19%, 1000=8.37% 00:16:38.441 lat (msec) : 2=7.00%, 4=0.58%, 10=20.70%, 20=47.47%, 50=10.69% 00:16:38.441 cpu : usr=99.10%, sys=0.16%, ctx=22, majf=0, minf=5565 00:16:38.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:38.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.441 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.441 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.441 00:16:38.441 Run status group 0 (all jobs): 00:16:38.441 READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=255MiB (267MB), run=8272-8272msec 00:16:38.441 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=256MiB (268MB), run=4151-4151msec 00:16:38.441 
----------------------------------------------------- 00:16:38.441 Suppressions used: 00:16:38.441 count bytes template 00:16:38.441 1 5 /usr/src/fio/parse.c 00:16:38.441 2 192 /usr/src/fio/iolog.c 00:16:38.441 1 8 libtcmalloc_minimal.so 00:16:38.441 1 904 libcrypto.so 00:16:38.441 ----------------------------------------------------- 00:16:38.441 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.441 Remove shared memory files 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57461 /dev/shm/spdk_tgt_trace.pid71702 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:38.441 ************************************ 00:16:38.441 END TEST ftl_fio_basic 00:16:38.441 ************************************ 00:16:38.441 00:16:38.441 real 1m0.305s 00:16:38.441 user 2m0.451s 00:16:38.441 sys 0m12.978s 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.441 20:00:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 20:00:21 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:38.441 
20:00:21 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:38.441 20:00:21 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.441 20:00:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:38.441 ************************************ 00:16:38.441 START TEST ftl_bdevperf 00:16:38.441 ************************************ 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:38.441 * Looking for test storage... 00:16:38.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 
00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:38.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.441 --rc genhtml_branch_coverage=1 00:16:38.441 --rc genhtml_function_coverage=1 00:16:38.441 --rc genhtml_legend=1 00:16:38.441 --rc geninfo_all_blocks=1 00:16:38.441 --rc geninfo_unexecuted_blocks=1 00:16:38.441 00:16:38.441 ' 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:38.441 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.441 --rc genhtml_branch_coverage=1 00:16:38.441 --rc genhtml_function_coverage=1 00:16:38.441 --rc genhtml_legend=1 00:16:38.441 --rc geninfo_all_blocks=1 00:16:38.441 --rc geninfo_unexecuted_blocks=1 00:16:38.441 00:16:38.441 ' 00:16:38.441 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:38.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.441 --rc genhtml_branch_coverage=1 00:16:38.441 --rc genhtml_function_coverage=1 00:16:38.441 --rc genhtml_legend=1 00:16:38.441 --rc geninfo_all_blocks=1 00:16:38.441 --rc geninfo_unexecuted_blocks=1 00:16:38.441 00:16:38.441 ' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:38.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.442 --rc genhtml_branch_coverage=1 00:16:38.442 --rc genhtml_function_coverage=1 00:16:38.442 --rc genhtml_legend=1 00:16:38.442 --rc geninfo_all_blocks=1 00:16:38.442 --rc geninfo_unexecuted_blocks=1 00:16:38.442 00:16:38.442 ' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73573 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73573 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73573 ']' 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:38.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.442 20:00:21 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.442 [2024-09-30 20:00:21.836589] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:38.442 [2024-09-30 20:00:21.836889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73573 ] 00:16:38.442 [2024-09-30 20:00:21.985304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.442 [2024-09-30 20:00:22.167868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:38.442 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:38.701 20:00:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:38.959 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:38.959 { 00:16:38.959 "name": "nvme0n1", 00:16:38.959 "aliases": [ 00:16:38.959 "eb9cebb1-7407-4d31-92ac-076cf163ea11" 00:16:38.959 ], 00:16:38.959 "product_name": "NVMe disk", 00:16:38.959 "block_size": 4096, 00:16:38.959 "num_blocks": 1310720, 00:16:38.959 "uuid": "eb9cebb1-7407-4d31-92ac-076cf163ea11", 00:16:38.959 "numa_id": -1, 00:16:38.959 "assigned_rate_limits": { 00:16:38.959 "rw_ios_per_sec": 0, 00:16:38.959 "rw_mbytes_per_sec": 0, 00:16:38.959 "r_mbytes_per_sec": 0, 00:16:38.959 "w_mbytes_per_sec": 0 00:16:38.959 }, 00:16:38.959 "claimed": true, 00:16:38.959 "claim_type": "read_many_write_one", 00:16:38.959 "zoned": false, 00:16:38.959 "supported_io_types": { 00:16:38.959 "read": true, 00:16:38.959 "write": true, 00:16:38.959 "unmap": true, 00:16:38.959 "flush": true, 00:16:38.959 "reset": true, 00:16:38.959 "nvme_admin": true, 00:16:38.959 "nvme_io": true, 00:16:38.959 "nvme_io_md": false, 00:16:38.959 "write_zeroes": true, 00:16:38.959 "zcopy": false, 00:16:38.959 "get_zone_info": false, 00:16:38.959 "zone_management": false, 00:16:38.959 "zone_append": false, 00:16:38.959 "compare": true, 00:16:38.959 "compare_and_write": false, 00:16:38.959 "abort": true, 00:16:38.959 "seek_hole": false, 00:16:38.959 "seek_data": false, 00:16:38.959 "copy": true, 00:16:38.959 "nvme_iov_md": false 00:16:38.959 }, 00:16:38.959 "driver_specific": { 00:16:38.959 "nvme": [ 00:16:38.959 { 00:16:38.959 
"pci_address": "0000:00:11.0", 00:16:38.959 "trid": { 00:16:38.959 "trtype": "PCIe", 00:16:38.959 "traddr": "0000:00:11.0" 00:16:38.959 }, 00:16:38.959 "ctrlr_data": { 00:16:38.959 "cntlid": 0, 00:16:38.959 "vendor_id": "0x1b36", 00:16:38.959 "model_number": "QEMU NVMe Ctrl", 00:16:38.959 "serial_number": "12341", 00:16:38.959 "firmware_revision": "8.0.0", 00:16:38.959 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:38.959 "oacs": { 00:16:38.959 "security": 0, 00:16:38.959 "format": 1, 00:16:38.959 "firmware": 0, 00:16:38.959 "ns_manage": 1 00:16:38.959 }, 00:16:38.959 "multi_ctrlr": false, 00:16:38.959 "ana_reporting": false 00:16:38.959 }, 00:16:38.959 "vs": { 00:16:38.959 "nvme_version": "1.4" 00:16:38.959 }, 00:16:38.959 "ns_data": { 00:16:38.960 "id": 1, 00:16:38.960 "can_share": false 00:16:38.960 } 00:16:38.960 } 00:16:38.960 ], 00:16:38.960 "mp_policy": "active_passive" 00:16:38.960 } 00:16:38.960 } 00:16:38.960 ]' 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:38.960 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:39.218 20:00:23 ftl.ftl_bdevperf -- 
ftl/common.sh@28 -- # stores=2a78c483-4eb7-4193-813e-19fa8cea1615 00:16:39.218 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:39.218 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a78c483-4eb7-4193-813e-19fa8cea1615 00:16:39.476 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:39.733 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=47a9fc0c-cfab-4946-b85d-9dda46c1bbdd 00:16:39.733 20:00:23 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 47a9fc0c-cfab-4946-b85d-9dda46c1bbdd 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:39.734 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:39.991 { 00:16:39.991 "name": "2905157e-eb44-42c0-9fde-acdf9a79ff97", 00:16:39.991 "aliases": [ 00:16:39.991 "lvs/nvme0n1p0" 00:16:39.991 ], 00:16:39.991 "product_name": "Logical Volume", 00:16:39.991 "block_size": 4096, 00:16:39.991 "num_blocks": 26476544, 00:16:39.991 "uuid": "2905157e-eb44-42c0-9fde-acdf9a79ff97", 00:16:39.991 "assigned_rate_limits": { 00:16:39.991 "rw_ios_per_sec": 0, 00:16:39.991 "rw_mbytes_per_sec": 0, 00:16:39.991 "r_mbytes_per_sec": 0, 00:16:39.991 "w_mbytes_per_sec": 0 00:16:39.991 }, 00:16:39.991 "claimed": false, 00:16:39.991 "zoned": false, 00:16:39.991 "supported_io_types": { 00:16:39.991 "read": true, 00:16:39.991 "write": true, 00:16:39.991 "unmap": true, 00:16:39.991 "flush": false, 00:16:39.991 "reset": true, 00:16:39.991 "nvme_admin": false, 00:16:39.991 "nvme_io": false, 00:16:39.991 "nvme_io_md": false, 00:16:39.991 "write_zeroes": true, 00:16:39.991 "zcopy": false, 00:16:39.991 "get_zone_info": false, 00:16:39.991 "zone_management": false, 00:16:39.991 "zone_append": false, 00:16:39.991 "compare": false, 00:16:39.991 "compare_and_write": false, 00:16:39.991 "abort": false, 00:16:39.991 "seek_hole": true, 00:16:39.991 "seek_data": true, 00:16:39.991 "copy": false, 00:16:39.991 "nvme_iov_md": false 00:16:39.991 }, 00:16:39.991 "driver_specific": { 00:16:39.991 "lvol": { 00:16:39.991 "lvol_store_uuid": "47a9fc0c-cfab-4946-b85d-9dda46c1bbdd", 00:16:39.991 "base_bdev": "nvme0n1", 00:16:39.991 "thin_provision": true, 00:16:39.991 "num_allocated_clusters": 0, 00:16:39.991 "snapshot": false, 00:16:39.991 "clone": false, 00:16:39.991 "esnap_clone": false 00:16:39.991 } 00:16:39.991 } 00:16:39.991 } 00:16:39.991 ]' 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:39.991 20:00:24 
ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:39.991 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:40.248 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:40.506 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:40.506 { 00:16:40.506 "name": "2905157e-eb44-42c0-9fde-acdf9a79ff97", 00:16:40.506 "aliases": [ 00:16:40.506 "lvs/nvme0n1p0" 00:16:40.506 ], 00:16:40.506 "product_name": "Logical Volume", 00:16:40.506 "block_size": 4096, 00:16:40.506 "num_blocks": 26476544, 00:16:40.506 "uuid": 
"2905157e-eb44-42c0-9fde-acdf9a79ff97", 00:16:40.506 "assigned_rate_limits": { 00:16:40.506 "rw_ios_per_sec": 0, 00:16:40.506 "rw_mbytes_per_sec": 0, 00:16:40.506 "r_mbytes_per_sec": 0, 00:16:40.506 "w_mbytes_per_sec": 0 00:16:40.506 }, 00:16:40.506 "claimed": false, 00:16:40.506 "zoned": false, 00:16:40.506 "supported_io_types": { 00:16:40.506 "read": true, 00:16:40.506 "write": true, 00:16:40.506 "unmap": true, 00:16:40.506 "flush": false, 00:16:40.506 "reset": true, 00:16:40.506 "nvme_admin": false, 00:16:40.506 "nvme_io": false, 00:16:40.506 "nvme_io_md": false, 00:16:40.506 "write_zeroes": true, 00:16:40.506 "zcopy": false, 00:16:40.506 "get_zone_info": false, 00:16:40.506 "zone_management": false, 00:16:40.506 "zone_append": false, 00:16:40.506 "compare": false, 00:16:40.506 "compare_and_write": false, 00:16:40.506 "abort": false, 00:16:40.506 "seek_hole": true, 00:16:40.506 "seek_data": true, 00:16:40.506 "copy": false, 00:16:40.506 "nvme_iov_md": false 00:16:40.506 }, 00:16:40.506 "driver_specific": { 00:16:40.506 "lvol": { 00:16:40.506 "lvol_store_uuid": "47a9fc0c-cfab-4946-b85d-9dda46c1bbdd", 00:16:40.506 "base_bdev": "nvme0n1", 00:16:40.506 "thin_provision": true, 00:16:40.506 "num_allocated_clusters": 0, 00:16:40.506 "snapshot": false, 00:16:40.506 "clone": false, 00:16:40.506 "esnap_clone": false 00:16:40.506 } 00:16:40.506 } 00:16:40.506 } 00:16:40.506 ]' 00:16:40.506 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:40.506 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:40.506 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:40.764 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:40.764 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:40.764 20:00:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:40.764 20:00:24 ftl.ftl_bdevperf -- 
ftl/common.sh@48 -- # cache_size=5171 00:16:40.764 20:00:24 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:40.764 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2905157e-eb44-42c0-9fde-acdf9a79ff97 00:16:41.021 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:41.021 { 00:16:41.021 "name": "2905157e-eb44-42c0-9fde-acdf9a79ff97", 00:16:41.021 "aliases": [ 00:16:41.021 "lvs/nvme0n1p0" 00:16:41.021 ], 00:16:41.021 "product_name": "Logical Volume", 00:16:41.021 "block_size": 4096, 00:16:41.022 "num_blocks": 26476544, 00:16:41.022 "uuid": "2905157e-eb44-42c0-9fde-acdf9a79ff97", 00:16:41.022 "assigned_rate_limits": { 00:16:41.022 "rw_ios_per_sec": 0, 00:16:41.022 "rw_mbytes_per_sec": 0, 00:16:41.022 "r_mbytes_per_sec": 0, 00:16:41.022 "w_mbytes_per_sec": 0 00:16:41.022 }, 00:16:41.022 "claimed": false, 00:16:41.022 "zoned": false, 00:16:41.022 "supported_io_types": { 00:16:41.022 "read": true, 00:16:41.022 "write": true, 00:16:41.022 "unmap": true, 00:16:41.022 "flush": false, 00:16:41.022 "reset": true, 00:16:41.022 "nvme_admin": false, 00:16:41.022 "nvme_io": false, 00:16:41.022 "nvme_io_md": false, 00:16:41.022 "write_zeroes": true, 00:16:41.022 "zcopy": false, 00:16:41.022 
"get_zone_info": false, 00:16:41.022 "zone_management": false, 00:16:41.022 "zone_append": false, 00:16:41.022 "compare": false, 00:16:41.022 "compare_and_write": false, 00:16:41.022 "abort": false, 00:16:41.022 "seek_hole": true, 00:16:41.022 "seek_data": true, 00:16:41.022 "copy": false, 00:16:41.022 "nvme_iov_md": false 00:16:41.022 }, 00:16:41.022 "driver_specific": { 00:16:41.022 "lvol": { 00:16:41.022 "lvol_store_uuid": "47a9fc0c-cfab-4946-b85d-9dda46c1bbdd", 00:16:41.022 "base_bdev": "nvme0n1", 00:16:41.022 "thin_provision": true, 00:16:41.022 "num_allocated_clusters": 0, 00:16:41.022 "snapshot": false, 00:16:41.022 "clone": false, 00:16:41.022 "esnap_clone": false 00:16:41.022 } 00:16:41.022 } 00:16:41.022 } 00:16:41.022 ]' 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:41.022 20:00:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2905157e-eb44-42c0-9fde-acdf9a79ff97 -c nvc0n1p0 --l2p_dram_limit 20 00:16:41.281 [2024-09-30 20:00:25.547552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.547619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:41.281 [2024-09-30 20:00:25.547631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:41.281 [2024-09-30 20:00:25.547640] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.547696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.547705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.281 [2024-09-30 20:00:25.547712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:41.281 [2024-09-30 20:00:25.547720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.547733] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:41.281 [2024-09-30 20:00:25.548401] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:41.281 [2024-09-30 20:00:25.548416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.548424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.281 [2024-09-30 20:00:25.548431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:16:41.281 [2024-09-30 20:00:25.548440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.548468] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d38d42a8-747f-4a73-9d7b-a3e665623a57 00:16:41.281 [2024-09-30 20:00:25.549793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.549827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:41.281 [2024-09-30 20:00:25.549841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:41.281 [2024-09-30 20:00:25.549849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.556686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:41.281 [2024-09-30 20:00:25.556852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.281 [2024-09-30 20:00:25.556868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.778 ms 00:16:41.281 [2024-09-30 20:00:25.556875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.556955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.556963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.281 [2024-09-30 20:00:25.556975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:41.281 [2024-09-30 20:00:25.556981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.557032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.557040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:41.281 [2024-09-30 20:00:25.557050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:41.281 [2024-09-30 20:00:25.557057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.557077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:41.281 [2024-09-30 20:00:25.560398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.560427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.281 [2024-09-30 20:00:25.560434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:16:41.281 [2024-09-30 20:00:25.560443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.560473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.560481] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:41.281 [2024-09-30 20:00:25.560487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:41.281 [2024-09-30 20:00:25.560495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.560508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:41.281 [2024-09-30 20:00:25.560624] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:41.281 [2024-09-30 20:00:25.560634] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:41.281 [2024-09-30 20:00:25.560646] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:41.281 [2024-09-30 20:00:25.560654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:41.281 [2024-09-30 20:00:25.560663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:41.281 [2024-09-30 20:00:25.560670] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:41.281 [2024-09-30 20:00:25.560680] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:41.281 [2024-09-30 20:00:25.560686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:41.281 [2024-09-30 20:00:25.560693] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:41.281 [2024-09-30 20:00:25.560700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.560708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:41.281 [2024-09-30 20:00:25.560714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 
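The layout setup above reports 20971520 L2P entries with a 4-byte address size. A quick sketch (helper name hypothetical) of how that translates into the 80.00 MiB l2p region reported in the NV cache layout dump that follows:

```python
def l2p_table_mib(l2p_entries: int, addr_size_bytes: int) -> float:
    """Full L2P mapping-table footprint in MiB: entries x bytes per address."""
    return l2p_entries * addr_size_bytes / (1024 * 1024)

# 20971520 entries x 4 bytes -> 80.0 MiB, matching the l2p region size
# in the ftl_layout_dump output below.
print(l2p_table_mib(20971520, 4))
```

Note that only a fraction of this table is resident in DRAM: the run passes `--l2p_dram_limit 20`, and the log later reports "l2p maximum resident size is: 19 (of 20) MiB".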
00:16:41.281 [2024-09-30 20:00:25.560722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.560785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-09-30 20:00:25.560793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:41.281 [2024-09-30 20:00:25.560799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:41.281 [2024-09-30 20:00:25.560810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-09-30 20:00:25.560879] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:41.281 [2024-09-30 20:00:25.560889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:41.281 [2024-09-30 20:00:25.560895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.281 [2024-09-30 20:00:25.560903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.281 [2024-09-30 20:00:25.560909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:41.281 [2024-09-30 20:00:25.560916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:41.281 [2024-09-30 20:00:25.560921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:41.281 [2024-09-30 20:00:25.560928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:41.281 [2024-09-30 20:00:25.560933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:41.281 [2024-09-30 20:00:25.560940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.281 [2024-09-30 20:00:25.560946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:41.281 [2024-09-30 20:00:25.560959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:41.281 [2024-09-30 20:00:25.560964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:16:41.281 [2024-09-30 20:00:25.560971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:41.281 [2024-09-30 20:00:25.560976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:41.281 [2024-09-30 20:00:25.560987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.281 [2024-09-30 20:00:25.560992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:41.281 [2024-09-30 20:00:25.560999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:41.281 [2024-09-30 20:00:25.561004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.281 [2024-09-30 20:00:25.561010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:41.281 [2024-09-30 20:00:25.561015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:41.281 [2024-09-30 20:00:25.561022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.281 [2024-09-30 20:00:25.561027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:41.281 [2024-09-30 20:00:25.561034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:41.281 [2024-09-30 20:00:25.561040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.281 [2024-09-30 20:00:25.561048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:41.282 [2024-09-30 20:00:25.561054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:41.282 [2024-09-30 20:00:25.561060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.282 [2024-09-30 20:00:25.561065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:41.282 [2024-09-30 20:00:25.561072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:41.282 [2024-09-30 20:00:25.561078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:16:41.282 [2024-09-30 20:00:25.561087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:41.282 [2024-09-30 20:00:25.561092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:41.282 [2024-09-30 20:00:25.561098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.282 [2024-09-30 20:00:25.561103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:41.282 [2024-09-30 20:00:25.561110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:41.282 [2024-09-30 20:00:25.561115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.282 [2024-09-30 20:00:25.561122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:41.282 [2024-09-30 20:00:25.561127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:41.282 [2024-09-30 20:00:25.561134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.282 [2024-09-30 20:00:25.561139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:41.282 [2024-09-30 20:00:25.561145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:41.282 [2024-09-30 20:00:25.561150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.282 [2024-09-30 20:00:25.561159] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:41.282 [2024-09-30 20:00:25.561165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:41.282 [2024-09-30 20:00:25.561172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.282 [2024-09-30 20:00:25.561177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.282 [2024-09-30 20:00:25.561186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:41.282 [2024-09-30 20:00:25.561193] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:41.282 [2024-09-30 20:00:25.561200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:41.282 [2024-09-30 20:00:25.561205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:41.282 [2024-09-30 20:00:25.561211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:41.282 [2024-09-30 20:00:25.561216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:41.282 [2024-09-30 20:00:25.561226] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:41.282 [2024-09-30 20:00:25.561235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:41.282 [2024-09-30 20:00:25.561255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:41.282 [2024-09-30 20:00:25.561262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:41.282 [2024-09-30 20:00:25.561283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:41.282 [2024-09-30 20:00:25.561291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:41.282 [2024-09-30 20:00:25.561297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:41.282 [2024-09-30 20:00:25.561305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:41.282 [2024-09-30 20:00:25.561310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:41.282 [2024-09-30 20:00:25.561319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:41.282 [2024-09-30 20:00:25.561324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:41.282 [2024-09-30 20:00:25.561358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:41.282 [2024-09-30 20:00:25.561366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:41.282 [2024-09-30 20:00:25.561392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:41.282 [2024-09-30 20:00:25.561400] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:41.282 [2024-09-30 20:00:25.561406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:41.282 [2024-09-30 20:00:25.561414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.282 [2024-09-30 20:00:25.561420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:41.282 [2024-09-30 20:00:25.561427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:16:41.282 [2024-09-30 20:00:25.561433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-09-30 20:00:25.561477] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:41.282 [2024-09-30 20:00:25.561486] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:43.810 [2024-09-30 20:00:27.698163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.698239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:43.811 [2024-09-30 20:00:27.698259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2136.668 ms 00:16:43.811 [2024-09-30 20:00:27.698285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.739989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.740262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.811 [2024-09-30 20:00:27.740300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.402 ms 00:16:43.811 [2024-09-30 20:00:27.740309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 
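The superblock metadata dump above lists regions as hex block offsets and sizes (blk_offs/blk_sz), while the earlier ftl_layout dump prints the same regions in MiB. Assuming the 4096-byte block size this device reports, the conversion can be sketched as (constant and helper name are illustrative, not SPDK API):

```python
FTL_BLOCK_SIZE = 4096  # block size reported for this device in the trace

def region_mib(blocks: int) -> float:
    """Convert a superblock region extent in blocks to MiB."""
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

# Region type:0x2 (l2p) in the SB dump: blk_offs:0x20 blk_sz:0x5000
print(region_mib(0x5000))  # 80.0 MiB, the l2p region size
print(region_mib(0x20))    # 0.125 MiB, shown rounded as "0.12 MiB" by dump_region
```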
20:00:27.740486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.740498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:43.811 [2024-09-30 20:00:27.740515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:43.811 [2024-09-30 20:00:27.740523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.773339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.773388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.811 [2024-09-30 20:00:27.773408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.764 ms 00:16:43.811 [2024-09-30 20:00:27.773416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.773464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.773472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.811 [2024-09-30 20:00:27.773483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.811 [2024-09-30 20:00:27.773490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.773957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.773974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.811 [2024-09-30 20:00:27.773985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:16:43.811 [2024-09-30 20:00:27.773993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.774130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.774142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands metadata 00:16:43.811 [2024-09-30 20:00:27.774154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:43.811 [2024-09-30 20:00:27.774162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.787674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.787706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.811 [2024-09-30 20:00:27.787719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.493 ms 00:16:43.811 [2024-09-30 20:00:27.787728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.799740] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:43.811 [2024-09-30 20:00:27.805712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.805759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:43.811 [2024-09-30 20:00:27.805786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.892 ms 00:16:43.811 [2024-09-30 20:00:27.805797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.868032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.868096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:43.811 [2024-09-30 20:00:27.868111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.198 ms 00:16:43.811 [2024-09-30 20:00:27.868121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.868333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.868350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:43.811 [2024-09-30 20:00:27.868360] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:16:43.811 [2024-09-30 20:00:27.868370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.891790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.891839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:43.811 [2024-09-30 20:00:27.891851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.375 ms 00:16:43.811 [2024-09-30 20:00:27.891864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.914065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.914106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:43.811 [2024-09-30 20:00:27.914118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.164 ms 00:16:43.811 [2024-09-30 20:00:27.914128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.914714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.914740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:43.811 [2024-09-30 20:00:27.914752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:16:43.811 [2024-09-30 20:00:27.914761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:27.984899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:27.984964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:43.811 [2024-09-30 20:00:27.984980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.104 ms 00:16:43.811 [2024-09-30 20:00:27.984991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:28.010013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:28.010067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:43.811 [2024-09-30 20:00:28.010081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.952 ms 00:16:43.811 [2024-09-30 20:00:28.010091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:28.034317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:28.034386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:43.811 [2024-09-30 20:00:28.034399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.185 ms 00:16:43.811 [2024-09-30 20:00:28.034408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:28.057021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:28.057066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:43.811 [2024-09-30 20:00:28.057078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.577 ms 00:16:43.811 [2024-09-30 20:00:28.057088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:28.057127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:28.057142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:43.811 [2024-09-30 20:00:28.057151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:43.811 [2024-09-30 20:00:28.057161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:28.057242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.811 [2024-09-30 20:00:28.057257] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:43.811 [2024-09-30 20:00:28.057265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:43.811 [2024-09-30 20:00:28.057295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.811 [2024-09-30 20:00:28.058243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2510.221 ms, result 0 00:16:43.811 { 00:16:43.811 "name": "ftl0", 00:16:43.811 "uuid": "d38d42a8-747f-4a73-9d7b-a3e665623a57" 00:16:43.811 } 00:16:43.811 20:00:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:43.811 20:00:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:43.811 20:00:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:44.070 20:00:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:44.070 [2024-09-30 20:00:28.374542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:44.070 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:44.070 Zero copy mechanism will not be used. 00:16:44.070 Running I/O for 4 seconds... 
00:16:48.309 3211.00 IOPS, 213.23 MiB/s 3267.00 IOPS, 216.95 MiB/s 3227.67 IOPS, 214.34 MiB/s 3170.75 IOPS, 210.56 MiB/s 00:16:48.309 Latency(us) 00:16:48.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.309 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:48.309 ftl0 : 4.00 3169.39 210.47 0.00 0.00 332.89 152.02 2092.11 00:16:48.309 =================================================================================================================== 00:16:48.309 Total : 3169.39 210.47 0.00 0.00 332.89 152.02 2092.11 00:16:48.309 { 00:16:48.309 "results": [ 00:16:48.309 { 00:16:48.309 "job": "ftl0", 00:16:48.309 "core_mask": "0x1", 00:16:48.309 "workload": "randwrite", 00:16:48.309 "status": "finished", 00:16:48.309 "queue_depth": 1, 00:16:48.309 "io_size": 69632, 00:16:48.309 "runtime": 4.00203, 00:16:48.309 "iops": 3169.391533796598, 00:16:48.309 "mibps": 210.46740654118034, 00:16:48.309 "io_failed": 0, 00:16:48.309 "io_timeout": 0, 00:16:48.309 "avg_latency_us": 332.8852029206996, 00:16:48.309 "min_latency_us": 152.02461538461537, 00:16:48.309 "max_latency_us": 2092.110769230769 00:16:48.309 } 00:16:48.309 ], 00:16:48.309 "core_count": 1 00:16:48.309 } 00:16:48.309 [2024-09-30 20:00:32.385301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:48.309 20:00:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:48.309 [2024-09-30 20:00:32.488940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:48.309 Running I/O for 4 seconds... 
00:16:52.491 10659.00 IOPS, 41.64 MiB/s 10951.00 IOPS, 42.78 MiB/s 10874.67 IOPS, 42.48 MiB/s 10868.50 IOPS, 42.46 MiB/s 00:16:52.491 Latency(us) 00:16:52.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.491 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.491 ftl0 : 4.02 10835.81 42.33 0.00 0.00 11780.17 196.92 29440.79 00:16:52.491 =================================================================================================================== 00:16:52.491 Total : 10835.81 42.33 0.00 0.00 11780.17 0.00 29440.79 00:16:52.491 { 00:16:52.491 "results": [ 00:16:52.491 { 00:16:52.491 "job": "ftl0", 00:16:52.491 "core_mask": "0x1", 00:16:52.491 "workload": "randwrite", 00:16:52.491 "status": "finished", 00:16:52.491 "queue_depth": 128, 00:16:52.491 "io_size": 4096, 00:16:52.491 "runtime": 4.023697, 00:16:52.491 "iops": 10835.805976444051, 00:16:52.491 "mibps": 42.327367095484576, 00:16:52.491 "io_failed": 0, 00:16:52.491 "io_timeout": 0, 00:16:52.491 "avg_latency_us": 11780.172248976713, 00:16:52.491 "min_latency_us": 196.92307692307693, 00:16:52.491 "max_latency_us": 29440.78769230769 00:16:52.491 } 00:16:52.491 ], 00:16:52.491 "core_count": 1 00:16:52.491 } 00:16:52.491 [2024-09-30 20:00:36.521753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:52.491 20:00:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:52.491 [2024-09-30 20:00:36.632222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:52.491 Running I/O for 4 seconds... 
00:16:56.672 8830.00 IOPS, 34.49 MiB/s 8555.00 IOPS, 33.42 MiB/s 8658.33 IOPS, 33.82 MiB/s 8714.25 IOPS, 34.04 MiB/s 00:16:56.672 Latency(us) 00:16:56.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.672 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:56.672 Verification LBA range: start 0x0 length 0x1400000 00:16:56.672 ftl0 : 4.01 8726.25 34.09 0.00 0.00 14622.98 234.73 82272.89 00:16:56.672 =================================================================================================================== 00:16:56.672 Total : 8726.25 34.09 0.00 0.00 14622.98 0.00 82272.89 00:16:56.672 [2024-09-30 20:00:40.656995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:56.672 { 00:16:56.672 "results": [ 00:16:56.672 { 00:16:56.672 "job": "ftl0", 00:16:56.672 "core_mask": "0x1", 00:16:56.672 "workload": "verify", 00:16:56.672 "status": "finished", 00:16:56.672 "verify_range": { 00:16:56.672 "start": 0, 00:16:56.672 "length": 20971520 00:16:56.672 }, 00:16:56.672 "queue_depth": 128, 00:16:56.672 "io_size": 4096, 00:16:56.672 "runtime": 4.009053, 00:16:56.672 "iops": 8726.250313976892, 00:16:56.672 "mibps": 34.086915288972236, 00:16:56.672 "io_failed": 0, 00:16:56.672 "io_timeout": 0, 00:16:56.672 "avg_latency_us": 14622.97643230312, 00:16:56.672 "min_latency_us": 234.7323076923077, 00:16:56.672 "max_latency_us": 82272.88615384615 00:16:56.672 } 00:16:56.672 ], 00:16:56.672 "core_count": 1 00:16:56.672 } 00:16:56.672 20:00:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:56.672 [2024-09-30 20:00:40.863372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-09-30 20:00:40.863629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:56.672 [2024-09-30 20:00:40.863689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:16:56.672 [2024-09-30 20:00:40.863716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-09-30 20:00:40.863758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.672 [2024-09-30 20:00:40.866539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-09-30 20:00:40.866665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:56.672 [2024-09-30 20:00:40.866729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:16:56.672 [2024-09-30 20:00:40.866756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-09-30 20:00:40.868587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-09-30 20:00:40.868699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:56.672 [2024-09-30 20:00:40.868767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:16:56.672 [2024-09-30 20:00:40.868791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-09-30 20:00:41.009752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-09-30 20:00:41.009934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:56.672 [2024-09-30 20:00:41.009961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.923 ms 00:16:56.672 [2024-09-30 20:00:41.009971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.672 [2024-09-30 20:00:41.016135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.672 [2024-09-30 20:00:41.016165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:56.672 [2024-09-30 20:00:41.016177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:16:56.672 [2024-09-30 
20:00:41.016185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.932 [2024-09-30 20:00:41.040466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.040501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:56.933 [2024-09-30 20:00:41.040515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:16:56.933 [2024-09-30 20:00:41.040522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.055956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.056080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.933 [2024-09-30 20:00:41.056102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.397 ms 00:16:56.933 [2024-09-30 20:00:41.056111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.056249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.056260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.933 [2024-09-30 20:00:41.056295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:56.933 [2024-09-30 20:00:41.056303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.079698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.079730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:56.933 [2024-09-30 20:00:41.079742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.378 ms 00:16:56.933 [2024-09-30 20:00:41.079749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.102404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.102517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:56.933 [2024-09-30 20:00:41.102536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.620 ms 00:16:56.933 [2024-09-30 20:00:41.102543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.124796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.124828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.933 [2024-09-30 20:00:41.124840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.222 ms 00:16:56.933 [2024-09-30 20:00:41.124848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.147149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.933 [2024-09-30 20:00:41.147180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.933 [2024-09-30 20:00:41.147195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.232 ms 00:16:56.933 [2024-09-30 20:00:41.147202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.933 [2024-09-30 20:00:41.147235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:56.933 [2024-09-30 20:00:41.147251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147304] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147430] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 
20:00:41.147559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 
[2024-09-30 20:00:41.147710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:56.933 [2024-09-30 20:00:41.147803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:16:56.934 [2024-09-30 20:00:41.147827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: 
free 00:16:56.934 [2024-09-30 20:00:41.147952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.147996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 
state: free 00:16:56.934 [2024-09-30 20:00:41.148070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:56.934 [2024-09-30 20:00:41.148181] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.934 [2024-09-30 20:00:41.148191] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d38d42a8-747f-4a73-9d7b-a3e665623a57 00:16:56.934 
[2024-09-30 20:00:41.148200] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.934 [2024-09-30 20:00:41.148210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.934 [2024-09-30 20:00:41.148222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.934 [2024-09-30 20:00:41.148232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.934 [2024-09-30 20:00:41.148239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.934 [2024-09-30 20:00:41.148248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.934 [2024-09-30 20:00:41.148255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.934 [2024-09-30 20:00:41.148265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.934 [2024-09-30 20:00:41.148281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.934 [2024-09-30 20:00:41.148289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.934 [2024-09-30 20:00:41.148299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.934 [2024-09-30 20:00:41.148310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:16:56.934 [2024-09-30 20:00:41.148318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.161144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.934 [2024-09-30 20:00:41.161176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.934 [2024-09-30 20:00:41.161189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.795 ms 00:16:56.934 [2024-09-30 20:00:41.161197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.161578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:56.934 [2024-09-30 20:00:41.161593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.934 [2024-09-30 20:00:41.161604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:16:56.934 [2024-09-30 20:00:41.161611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.193417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.934 [2024-09-30 20:00:41.193613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.934 [2024-09-30 20:00:41.193637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.934 [2024-09-30 20:00:41.193645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.193712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.934 [2024-09-30 20:00:41.193721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.934 [2024-09-30 20:00:41.193730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.934 [2024-09-30 20:00:41.193738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.193826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.934 [2024-09-30 20:00:41.193838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.934 [2024-09-30 20:00:41.193848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.934 [2024-09-30 20:00:41.193856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.193874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.934 [2024-09-30 20:00:41.193883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.934 [2024-09-30 20:00:41.193893] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.934 [2024-09-30 20:00:41.193900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.934 [2024-09-30 20:00:41.275243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.934 [2024-09-30 20:00:41.275321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.934 [2024-09-30 20:00:41.275339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.934 [2024-09-30 20:00:41.275348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.341784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.341849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.194 [2024-09-30 20:00:41.341863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.341871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.341976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.341986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:57.194 [2024-09-30 20:00:41.341997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.342004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.342050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.342059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:57.194 [2024-09-30 20:00:41.342071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.342078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:57.194 [2024-09-30 20:00:41.342175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.342184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:57.194 [2024-09-30 20:00:41.342198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.342206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.342236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.342245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:57.194 [2024-09-30 20:00:41.342254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.342264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.342328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.342337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:57.194 [2024-09-30 20:00:41.342347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.342354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.342401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:57.194 [2024-09-30 20:00:41.342411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:57.194 [2024-09-30 20:00:41.342423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:57.194 [2024-09-30 20:00:41.342430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.194 [2024-09-30 20:00:41.342567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 479.148 ms, result 0 00:16:57.194 
true 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73573 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73573 ']' 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73573 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73573 00:16:57.194 killing process with pid 73573 00:16:57.194 Received shutdown signal, test time was about 4.000000 seconds 00:16:57.194 00:16:57.194 Latency(us) 00:16:57.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.194 =================================================================================================================== 00:16:57.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73573' 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73573 00:16:57.194 20:00:41 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73573 00:17:01.380 Remove shared memory files 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- 
ftl/common.sh@207 -- # rm -f rm -f 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:01.380 ************************************ 00:17:01.380 END TEST ftl_bdevperf 00:17:01.380 ************************************ 00:17:01.380 00:17:01.380 real 0m24.020s 00:17:01.380 user 0m26.677s 00:17:01.380 sys 0m0.940s 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.380 20:00:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:01.380 20:00:45 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:01.380 20:00:45 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:01.380 20:00:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.380 20:00:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:01.380 ************************************ 00:17:01.380 START TEST ftl_trim 00:17:01.380 ************************************ 00:17:01.380 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:01.380 * Looking for test storage... 
00:17:01.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.380 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:01.380 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:17:01.380 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.639 20:00:45 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:01.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.639 --rc genhtml_branch_coverage=1 00:17:01.639 --rc genhtml_function_coverage=1 00:17:01.639 --rc genhtml_legend=1 00:17:01.639 --rc geninfo_all_blocks=1 00:17:01.639 --rc geninfo_unexecuted_blocks=1 00:17:01.639 00:17:01.639 ' 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:01.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.639 --rc genhtml_branch_coverage=1 00:17:01.639 --rc genhtml_function_coverage=1 00:17:01.639 --rc genhtml_legend=1 00:17:01.639 --rc geninfo_all_blocks=1 00:17:01.639 --rc geninfo_unexecuted_blocks=1 00:17:01.639 00:17:01.639 ' 00:17:01.639 
20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:01.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.639 --rc genhtml_branch_coverage=1 00:17:01.639 --rc genhtml_function_coverage=1 00:17:01.639 --rc genhtml_legend=1 00:17:01.639 --rc geninfo_all_blocks=1 00:17:01.639 --rc geninfo_unexecuted_blocks=1 00:17:01.639 00:17:01.639 ' 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:01.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.639 --rc genhtml_branch_coverage=1 00:17:01.639 --rc genhtml_function_coverage=1 00:17:01.639 --rc genhtml_legend=1 00:17:01.639 --rc geninfo_all_blocks=1 00:17:01.639 --rc geninfo_unexecuted_blocks=1 00:17:01.639 00:17:01.639 ' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.639 20:00:45 
ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73915 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73915 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73915 ']' 00:17:01.639 20:00:45 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:01.639 20:00:45 ftl.ftl_trim -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.639 20:00:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:01.639 [2024-09-30 20:00:45.884526] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:01.639 [2024-09-30 20:00:45.884976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73915 ] 00:17:01.897 [2024-09-30 20:00:46.031439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.897 [2024-09-30 20:00:46.245630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.897 [2024-09-30 20:00:46.245904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.897 [2024-09-30 20:00:46.245938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.833 20:00:46 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.833 20:00:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:02.833 20:00:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:02.833 20:00:46 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:02.833 20:00:46 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:02.833 20:00:46 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:02.833 20:00:46 ftl.ftl_trim 
-- ftl/common.sh@59 -- # local base_bdev 00:17:02.833 20:00:46 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:02.833 20:00:47 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:02.833 20:00:47 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:02.833 20:00:47 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:02.833 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:02.833 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:02.833 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:02.833 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:02.833 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:03.092 { 00:17:03.092 "name": "nvme0n1", 00:17:03.092 "aliases": [ 00:17:03.092 "0f7f6d7e-729a-49c2-8d6c-9897041ada4c" 00:17:03.092 ], 00:17:03.092 "product_name": "NVMe disk", 00:17:03.092 "block_size": 4096, 00:17:03.092 "num_blocks": 1310720, 00:17:03.092 "uuid": "0f7f6d7e-729a-49c2-8d6c-9897041ada4c", 00:17:03.092 "numa_id": -1, 00:17:03.092 "assigned_rate_limits": { 00:17:03.092 "rw_ios_per_sec": 0, 00:17:03.092 "rw_mbytes_per_sec": 0, 00:17:03.092 "r_mbytes_per_sec": 0, 00:17:03.092 "w_mbytes_per_sec": 0 00:17:03.092 }, 00:17:03.092 "claimed": true, 00:17:03.092 "claim_type": "read_many_write_one", 00:17:03.092 "zoned": false, 00:17:03.092 "supported_io_types": { 00:17:03.092 "read": true, 00:17:03.092 "write": true, 00:17:03.092 "unmap": true, 00:17:03.092 "flush": true, 00:17:03.092 "reset": true, 00:17:03.092 "nvme_admin": true, 00:17:03.092 "nvme_io": true, 00:17:03.092 "nvme_io_md": false, 00:17:03.092 "write_zeroes": true, 00:17:03.092 
"zcopy": false, 00:17:03.092 "get_zone_info": false, 00:17:03.092 "zone_management": false, 00:17:03.092 "zone_append": false, 00:17:03.092 "compare": true, 00:17:03.092 "compare_and_write": false, 00:17:03.092 "abort": true, 00:17:03.092 "seek_hole": false, 00:17:03.092 "seek_data": false, 00:17:03.092 "copy": true, 00:17:03.092 "nvme_iov_md": false 00:17:03.092 }, 00:17:03.092 "driver_specific": { 00:17:03.092 "nvme": [ 00:17:03.092 { 00:17:03.092 "pci_address": "0000:00:11.0", 00:17:03.092 "trid": { 00:17:03.092 "trtype": "PCIe", 00:17:03.092 "traddr": "0000:00:11.0" 00:17:03.092 }, 00:17:03.092 "ctrlr_data": { 00:17:03.092 "cntlid": 0, 00:17:03.092 "vendor_id": "0x1b36", 00:17:03.092 "model_number": "QEMU NVMe Ctrl", 00:17:03.092 "serial_number": "12341", 00:17:03.092 "firmware_revision": "8.0.0", 00:17:03.092 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:03.092 "oacs": { 00:17:03.092 "security": 0, 00:17:03.092 "format": 1, 00:17:03.092 "firmware": 0, 00:17:03.092 "ns_manage": 1 00:17:03.092 }, 00:17:03.092 "multi_ctrlr": false, 00:17:03.092 "ana_reporting": false 00:17:03.092 }, 00:17:03.092 "vs": { 00:17:03.092 "nvme_version": "1.4" 00:17:03.092 }, 00:17:03.092 "ns_data": { 00:17:03.092 "id": 1, 00:17:03.092 "can_share": false 00:17:03.092 } 00:17:03.092 } 00:17:03.092 ], 00:17:03.092 "mp_policy": "active_passive" 00:17:03.092 } 00:17:03.092 } 00:17:03.092 ]' 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:03.092 20:00:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:03.092 20:00:47 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 
00:17:03.092 20:00:47 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:03.092 20:00:47 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:03.092 20:00:47 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:03.350 20:00:47 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:03.350 20:00:47 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=47a9fc0c-cfab-4946-b85d-9dda46c1bbdd 00:17:03.350 20:00:47 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:03.350 20:00:47 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47a9fc0c-cfab-4946-b85d-9dda46c1bbdd 00:17:03.608 20:00:47 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:03.866 20:00:48 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e3b9884d-d856-432f-b114-b7f48fa29dae 00:17:03.866 20:00:48 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e3b9884d-d856-432f-b114-b7f48fa29dae 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:04.124 20:00:48 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.124 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.124 20:00:48 
ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:04.124 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:04.124 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:04.124 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.124 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:04.124 { 00:17:04.124 "name": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:04.124 "aliases": [ 00:17:04.124 "lvs/nvme0n1p0" 00:17:04.124 ], 00:17:04.124 "product_name": "Logical Volume", 00:17:04.124 "block_size": 4096, 00:17:04.124 "num_blocks": 26476544, 00:17:04.124 "uuid": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:04.124 "assigned_rate_limits": { 00:17:04.124 "rw_ios_per_sec": 0, 00:17:04.124 "rw_mbytes_per_sec": 0, 00:17:04.124 "r_mbytes_per_sec": 0, 00:17:04.124 "w_mbytes_per_sec": 0 00:17:04.124 }, 00:17:04.124 "claimed": false, 00:17:04.124 "zoned": false, 00:17:04.124 "supported_io_types": { 00:17:04.124 "read": true, 00:17:04.124 "write": true, 00:17:04.124 "unmap": true, 00:17:04.124 "flush": false, 00:17:04.124 "reset": true, 00:17:04.124 "nvme_admin": false, 00:17:04.124 "nvme_io": false, 00:17:04.124 "nvme_io_md": false, 00:17:04.124 "write_zeroes": true, 00:17:04.124 "zcopy": false, 00:17:04.124 "get_zone_info": false, 00:17:04.124 "zone_management": false, 00:17:04.124 "zone_append": false, 00:17:04.124 "compare": false, 00:17:04.124 "compare_and_write": false, 00:17:04.124 "abort": false, 00:17:04.124 "seek_hole": true, 00:17:04.124 "seek_data": true, 00:17:04.124 "copy": false, 00:17:04.124 "nvme_iov_md": false 00:17:04.124 }, 00:17:04.124 "driver_specific": { 00:17:04.124 "lvol": { 00:17:04.124 "lvol_store_uuid": "e3b9884d-d856-432f-b114-b7f48fa29dae", 00:17:04.124 "base_bdev": "nvme0n1", 00:17:04.124 "thin_provision": true, 00:17:04.124 
"num_allocated_clusters": 0, 00:17:04.124 "snapshot": false, 00:17:04.124 "clone": false, 00:17:04.124 "esnap_clone": false 00:17:04.124 } 00:17:04.124 } 00:17:04.124 } 00:17:04.124 ]' 00:17:04.124 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:04.383 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:04.383 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:04.383 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:04.383 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:04.383 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:04.383 20:00:48 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:04.383 20:00:48 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:04.383 20:00:48 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:04.641 20:00:48 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:04.641 20:00:48 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:04.641 20:00:48 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:04.641 { 00:17:04.641 "name": 
"7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:04.641 "aliases": [ 00:17:04.641 "lvs/nvme0n1p0" 00:17:04.641 ], 00:17:04.641 "product_name": "Logical Volume", 00:17:04.641 "block_size": 4096, 00:17:04.641 "num_blocks": 26476544, 00:17:04.641 "uuid": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:04.641 "assigned_rate_limits": { 00:17:04.641 "rw_ios_per_sec": 0, 00:17:04.641 "rw_mbytes_per_sec": 0, 00:17:04.641 "r_mbytes_per_sec": 0, 00:17:04.641 "w_mbytes_per_sec": 0 00:17:04.641 }, 00:17:04.641 "claimed": false, 00:17:04.641 "zoned": false, 00:17:04.641 "supported_io_types": { 00:17:04.641 "read": true, 00:17:04.641 "write": true, 00:17:04.641 "unmap": true, 00:17:04.641 "flush": false, 00:17:04.641 "reset": true, 00:17:04.641 "nvme_admin": false, 00:17:04.641 "nvme_io": false, 00:17:04.641 "nvme_io_md": false, 00:17:04.641 "write_zeroes": true, 00:17:04.641 "zcopy": false, 00:17:04.641 "get_zone_info": false, 00:17:04.641 "zone_management": false, 00:17:04.641 "zone_append": false, 00:17:04.641 "compare": false, 00:17:04.641 "compare_and_write": false, 00:17:04.641 "abort": false, 00:17:04.641 "seek_hole": true, 00:17:04.641 "seek_data": true, 00:17:04.641 "copy": false, 00:17:04.641 "nvme_iov_md": false 00:17:04.641 }, 00:17:04.641 "driver_specific": { 00:17:04.641 "lvol": { 00:17:04.641 "lvol_store_uuid": "e3b9884d-d856-432f-b114-b7f48fa29dae", 00:17:04.641 "base_bdev": "nvme0n1", 00:17:04.641 "thin_provision": true, 00:17:04.641 "num_allocated_clusters": 0, 00:17:04.641 "snapshot": false, 00:17:04.641 "clone": false, 00:17:04.641 "esnap_clone": false 00:17:04.641 } 00:17:04.641 } 00:17:04.641 } 00:17:04.641 ]' 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 
00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:04.641 20:00:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:04.641 20:00:48 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:04.641 20:00:48 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:04.899 20:00:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:04.899 20:00:49 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:04.899 20:00:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.899 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:04.899 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:04.899 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:04.899 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:04.899 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2741f1-e1be-476e-8c88-e8fa84a89ede 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:05.157 { 00:17:05.157 "name": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:05.157 "aliases": [ 00:17:05.157 "lvs/nvme0n1p0" 00:17:05.157 ], 00:17:05.157 "product_name": "Logical Volume", 00:17:05.157 "block_size": 4096, 00:17:05.157 "num_blocks": 26476544, 00:17:05.157 "uuid": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:05.157 "assigned_rate_limits": { 00:17:05.157 "rw_ios_per_sec": 0, 00:17:05.157 "rw_mbytes_per_sec": 0, 00:17:05.157 "r_mbytes_per_sec": 0, 00:17:05.157 "w_mbytes_per_sec": 0 00:17:05.157 }, 00:17:05.157 "claimed": false, 00:17:05.157 "zoned": false, 00:17:05.157 "supported_io_types": { 00:17:05.157 "read": true, 00:17:05.157 "write": true, 00:17:05.157 "unmap": true, 
00:17:05.157 "flush": false, 00:17:05.157 "reset": true, 00:17:05.157 "nvme_admin": false, 00:17:05.157 "nvme_io": false, 00:17:05.157 "nvme_io_md": false, 00:17:05.157 "write_zeroes": true, 00:17:05.157 "zcopy": false, 00:17:05.157 "get_zone_info": false, 00:17:05.157 "zone_management": false, 00:17:05.157 "zone_append": false, 00:17:05.157 "compare": false, 00:17:05.157 "compare_and_write": false, 00:17:05.157 "abort": false, 00:17:05.157 "seek_hole": true, 00:17:05.157 "seek_data": true, 00:17:05.157 "copy": false, 00:17:05.157 "nvme_iov_md": false 00:17:05.157 }, 00:17:05.157 "driver_specific": { 00:17:05.157 "lvol": { 00:17:05.157 "lvol_store_uuid": "e3b9884d-d856-432f-b114-b7f48fa29dae", 00:17:05.157 "base_bdev": "nvme0n1", 00:17:05.157 "thin_provision": true, 00:17:05.157 "num_allocated_clusters": 0, 00:17:05.157 "snapshot": false, 00:17:05.157 "clone": false, 00:17:05.157 "esnap_clone": false 00:17:05.157 } 00:17:05.157 } 00:17:05.157 } 00:17:05.157 ]' 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:05.157 20:00:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:05.157 20:00:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:05.157 20:00:49 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7a2741f1-e1be-476e-8c88-e8fa84a89ede -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:05.415 [2024-09-30 20:00:49.658051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.658106] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.415 [2024-09-30 20:00:49.658121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.415 [2024-09-30 20:00:49.658131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.660559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.660591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.415 [2024-09-30 20:00:49.660601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:17:05.415 [2024-09-30 20:00:49.660607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.660716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.415 [2024-09-30 20:00:49.661348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.415 [2024-09-30 20:00:49.661376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.661383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.415 [2024-09-30 20:00:49.661392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:17:05.415 [2024-09-30 20:00:49.661400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.661546] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3f28407e-e46f-4786-a195-419638626b81 00:17:05.415 [2024-09-30 20:00:49.662873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.662906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:05.415 [2024-09-30 20:00:49.662915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 
ms 00:17:05.415 [2024-09-30 20:00:49.662925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.669987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.670018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.415 [2024-09-30 20:00:49.670026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:17:05.415 [2024-09-30 20:00:49.670035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.670160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.670172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.415 [2024-09-30 20:00:49.670179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:05.415 [2024-09-30 20:00:49.670191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.670226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.670235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.415 [2024-09-30 20:00:49.670241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.415 [2024-09-30 20:00:49.670249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.670291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.415 [2024-09-30 20:00:49.673627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.673656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.415 [2024-09-30 20:00:49.673666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:17:05.415 [2024-09-30 
20:00:49.673672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.673722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.415 [2024-09-30 20:00:49.673730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.415 [2024-09-30 20:00:49.673740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:05.415 [2024-09-30 20:00:49.673748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.415 [2024-09-30 20:00:49.673778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:05.415 [2024-09-30 20:00:49.673898] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.415 [2024-09-30 20:00:49.673913] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.415 [2024-09-30 20:00:49.673936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:05.415 [2024-09-30 20:00:49.673948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.416 [2024-09-30 20:00:49.673956] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.416 [2024-09-30 20:00:49.673965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.416 [2024-09-30 20:00:49.673970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.416 [2024-09-30 20:00:49.673978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.416 [2024-09-30 20:00:49.673984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.416 [2024-09-30 20:00:49.673992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:05.416 [2024-09-30 20:00:49.673999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.416 [2024-09-30 20:00:49.674006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:05.416 [2024-09-30 20:00:49.674012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.416 [2024-09-30 20:00:49.674088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.416 [2024-09-30 20:00:49.674098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.416 [2024-09-30 20:00:49.674106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:05.416 [2024-09-30 20:00:49.674111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.416 [2024-09-30 20:00:49.674225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.416 [2024-09-30 20:00:49.674241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.416 [2024-09-30 20:00:49.674251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.416 [2024-09-30 20:00:49.674282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.416 [2024-09-30 20:00:49.674303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.416 [2024-09-30 20:00:49.674315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region band_md_mirror 00:17:05.416 [2024-09-30 20:00:49.674321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.416 [2024-09-30 20:00:49.674328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.416 [2024-09-30 20:00:49.674333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.416 [2024-09-30 20:00:49.674340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:05.416 [2024-09-30 20:00:49.674345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.416 [2024-09-30 20:00:49.674359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.416 [2024-09-30 20:00:49.674377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.416 [2024-09-30 20:00:49.674397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.416 [2024-09-30 20:00:49.674418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674431] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.416 [2024-09-30 20:00:49.674436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.416 [2024-09-30 20:00:49.674456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.416 [2024-09-30 20:00:49.674469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.416 [2024-09-30 20:00:49.674474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:05.416 [2024-09-30 20:00:49.674480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.416 [2024-09-30 20:00:49.674485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.416 [2024-09-30 20:00:49.674492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:05.416 [2024-09-30 20:00:49.674497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.416 [2024-09-30 20:00:49.674509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:05.416 [2024-09-30 20:00:49.674516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.416 [2024-09-30 20:00:49.674529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.416 [2024-09-30 20:00:49.674536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.416 [2024-09-30 
20:00:49.674543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.416 [2024-09-30 20:00:49.674550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.416 [2024-09-30 20:00:49.674559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.416 [2024-09-30 20:00:49.674564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.416 [2024-09-30 20:00:49.674571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.416 [2024-09-30 20:00:49.674575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.416 [2024-09-30 20:00:49.674582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.416 [2024-09-30 20:00:49.674590] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.416 [2024-09-30 20:00:49.674599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.416 [2024-09-30 20:00:49.674617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:05.416 [2024-09-30 20:00:49.674623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:05.416 [2024-09-30 20:00:49.674631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:05.416 [2024-09-30 20:00:49.674637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:05.416 [2024-09-30 20:00:49.674643] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:05.416 [2024-09-30 20:00:49.674649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:05.416 [2024-09-30 20:00:49.674656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:05.416 [2024-09-30 20:00:49.674663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:05.416 [2024-09-30 20:00:49.674671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:05.416 [2024-09-30 20:00:49.674702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.416 [2024-09-30 20:00:49.674710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.416 [2024-09-30 20:00:49.674725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.416 [2024-09-30 20:00:49.674730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.417 [2024-09-30 20:00:49.674737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.417 [2024-09-30 20:00:49.674744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.417 [2024-09-30 20:00:49.674751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.417 [2024-09-30 20:00:49.674757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:17:05.417 [2024-09-30 20:00:49.674764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.417 [2024-09-30 20:00:49.674849] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:05.417 [2024-09-30 20:00:49.674864] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:07.949 [2024-09-30 20:00:51.895976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.896051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:07.949 [2024-09-30 20:00:51.896067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2221.116 ms 00:17:07.949 [2024-09-30 20:00:51.896078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.945722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.945790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:07.949 [2024-09-30 20:00:51.945812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.379 ms 00:17:07.949 [2024-09-30 20:00:51.945823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.946003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.946018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:07.949 [2024-09-30 20:00:51.946027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:07.949 [2024-09-30 20:00:51.946038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.979933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.979981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:07.949 [2024-09-30 20:00:51.979994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.855 ms 00:17:07.949 [2024-09-30 20:00:51.980004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.980119] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.980136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:07.949 [2024-09-30 20:00:51.980145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:07.949 [2024-09-30 20:00:51.980155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.980592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.980619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:07.949 [2024-09-30 20:00:51.980629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:07.949 [2024-09-30 20:00:51.980639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.980784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.980795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:07.949 [2024-09-30 20:00:51.980806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:07.949 [2024-09-30 20:00:51.980818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:51.996832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.949 [2024-09-30 20:00:51.996866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:07.949 [2024-09-30 20:00:51.996879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:17:07.949 [2024-09-30 20:00:51.996888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.949 [2024-09-30 20:00:52.009339] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:07.949 [2024-09-30 20:00:52.026799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:07.950 [2024-09-30 20:00:52.026843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:07.950 [2024-09-30 20:00:52.026858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.786 ms 00:17:07.950 [2024-09-30 20:00:52.026866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.093985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.094053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:07.950 [2024-09-30 20:00:52.094071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.013 ms 00:17:07.950 [2024-09-30 20:00:52.094081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.094320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.094333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:07.950 [2024-09-30 20:00:52.094351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:07.950 [2024-09-30 20:00:52.094359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.118537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.118605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:07.950 [2024-09-30 20:00:52.118620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.141 ms 00:17:07.950 [2024-09-30 20:00:52.118627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.141115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.141157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:07.950 [2024-09-30 20:00:52.141171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.431 ms 00:17:07.950 [2024-09-30 20:00:52.141178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.141780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.141810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:07.950 [2024-09-30 20:00:52.141823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:17:07.950 [2024-09-30 20:00:52.141830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.216839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.216892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:07.950 [2024-09-30 20:00:52.216912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.960 ms 00:17:07.950 [2024-09-30 20:00:52.216920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.241950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.241995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:07.950 [2024-09-30 20:00:52.242010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.920 ms 00:17:07.950 [2024-09-30 20:00:52.242018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.265107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.265147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:07.950 [2024-09-30 20:00:52.265160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.027 ms 00:17:07.950 [2024-09-30 20:00:52.265167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 
[2024-09-30 20:00:52.289119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.289159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:07.950 [2024-09-30 20:00:52.289172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.869 ms 00:17:07.950 [2024-09-30 20:00:52.289179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.289246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.289256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:07.950 [2024-09-30 20:00:52.289282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:07.950 [2024-09-30 20:00:52.289304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.289396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.950 [2024-09-30 20:00:52.289406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:07.950 [2024-09-30 20:00:52.289418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:07.950 [2024-09-30 20:00:52.289425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.950 [2024-09-30 20:00:52.291649] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.950 [2024-09-30 20:00:52.299625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2632.681 ms, result 0 00:17:07.950 [2024-09-30 20:00:52.301050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.950 { 00:17:07.950 "name": "ftl0", 00:17:07.950 "uuid": "3f28407e-e46f-4786-a195-419638626b81" 00:17:07.950 } 00:17:08.209 20:00:52 ftl.ftl_trim -- ftl/trim.sh@51 -- # 
waitforbdev ftl0 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.209 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:08.467 [ 00:17:08.467 { 00:17:08.467 "name": "ftl0", 00:17:08.467 "aliases": [ 00:17:08.467 "3f28407e-e46f-4786-a195-419638626b81" 00:17:08.467 ], 00:17:08.467 "product_name": "FTL disk", 00:17:08.467 "block_size": 4096, 00:17:08.467 "num_blocks": 23592960, 00:17:08.467 "uuid": "3f28407e-e46f-4786-a195-419638626b81", 00:17:08.467 "assigned_rate_limits": { 00:17:08.467 "rw_ios_per_sec": 0, 00:17:08.467 "rw_mbytes_per_sec": 0, 00:17:08.467 "r_mbytes_per_sec": 0, 00:17:08.467 "w_mbytes_per_sec": 0 00:17:08.467 }, 00:17:08.467 "claimed": false, 00:17:08.467 "zoned": false, 00:17:08.467 "supported_io_types": { 00:17:08.467 "read": true, 00:17:08.467 "write": true, 00:17:08.467 "unmap": true, 00:17:08.467 "flush": true, 00:17:08.467 "reset": false, 00:17:08.467 "nvme_admin": false, 00:17:08.467 "nvme_io": false, 00:17:08.467 "nvme_io_md": false, 00:17:08.467 "write_zeroes": true, 00:17:08.467 "zcopy": false, 00:17:08.467 "get_zone_info": false, 00:17:08.467 "zone_management": false, 00:17:08.467 "zone_append": false, 00:17:08.467 "compare": false, 00:17:08.467 "compare_and_write": false, 00:17:08.467 "abort": false, 00:17:08.467 "seek_hole": false, 00:17:08.467 "seek_data": false, 00:17:08.467 "copy": false, 00:17:08.467 
"nvme_iov_md": false 00:17:08.467 }, 00:17:08.467 "driver_specific": { 00:17:08.467 "ftl": { 00:17:08.467 "base_bdev": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:08.467 "cache": "nvc0n1p0" 00:17:08.467 } 00:17:08.467 } 00:17:08.467 } 00:17:08.467 ] 00:17:08.467 20:00:52 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:08.467 20:00:52 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:08.467 20:00:52 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:08.725 20:00:52 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:08.725 20:00:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:08.984 20:00:53 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:08.984 { 00:17:08.984 "name": "ftl0", 00:17:08.984 "aliases": [ 00:17:08.984 "3f28407e-e46f-4786-a195-419638626b81" 00:17:08.984 ], 00:17:08.984 "product_name": "FTL disk", 00:17:08.984 "block_size": 4096, 00:17:08.984 "num_blocks": 23592960, 00:17:08.984 "uuid": "3f28407e-e46f-4786-a195-419638626b81", 00:17:08.984 "assigned_rate_limits": { 00:17:08.984 "rw_ios_per_sec": 0, 00:17:08.984 "rw_mbytes_per_sec": 0, 00:17:08.984 "r_mbytes_per_sec": 0, 00:17:08.984 "w_mbytes_per_sec": 0 00:17:08.984 }, 00:17:08.984 "claimed": false, 00:17:08.984 "zoned": false, 00:17:08.984 "supported_io_types": { 00:17:08.984 "read": true, 00:17:08.984 "write": true, 00:17:08.984 "unmap": true, 00:17:08.984 "flush": true, 00:17:08.984 "reset": false, 00:17:08.984 "nvme_admin": false, 00:17:08.984 "nvme_io": false, 00:17:08.984 "nvme_io_md": false, 00:17:08.984 "write_zeroes": true, 00:17:08.984 "zcopy": false, 00:17:08.984 "get_zone_info": false, 00:17:08.984 "zone_management": false, 00:17:08.984 "zone_append": false, 00:17:08.984 "compare": false, 00:17:08.984 "compare_and_write": false, 00:17:08.984 "abort": false, 00:17:08.984 "seek_hole": false, 00:17:08.984 "seek_data": false, 
00:17:08.984 "copy": false, 00:17:08.984 "nvme_iov_md": false 00:17:08.984 }, 00:17:08.984 "driver_specific": { 00:17:08.984 "ftl": { 00:17:08.984 "base_bdev": "7a2741f1-e1be-476e-8c88-e8fa84a89ede", 00:17:08.984 "cache": "nvc0n1p0" 00:17:08.984 } 00:17:08.984 } 00:17:08.984 } 00:17:08.984 ]' 00:17:08.984 20:00:53 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:08.984 20:00:53 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:08.984 20:00:53 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:09.244 [2024-09-30 20:00:53.357891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.357955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.244 [2024-09-30 20:00:53.357971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:09.244 [2024-09-30 20:00:53.357981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.358017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:09.244 [2024-09-30 20:00:53.360776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.360812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.244 [2024-09-30 20:00:53.360827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:17:09.244 [2024-09-30 20:00:53.360835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.361381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.361410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.244 [2024-09-30 20:00:53.361421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:17:09.244 [2024-09-30 
20:00:53.361429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.365081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.365104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.244 [2024-09-30 20:00:53.365116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:17:09.244 [2024-09-30 20:00:53.365125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.372133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.372178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.244 [2024-09-30 20:00:53.372192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.945 ms 00:17:09.244 [2024-09-30 20:00:53.372200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.396976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.397017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.244 [2024-09-30 20:00:53.397034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.692 ms 00:17:09.244 [2024-09-30 20:00:53.397041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.412550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.412588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.244 [2024-09-30 20:00:53.412602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.441 ms 00:17:09.244 [2024-09-30 20:00:53.412610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.412824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:09.244 [2024-09-30 20:00:53.412836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.244 [2024-09-30 20:00:53.412846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:09.244 [2024-09-30 20:00:53.412853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.436155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.436197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.244 [2024-09-30 20:00:53.436211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.260 ms 00:17:09.244 [2024-09-30 20:00:53.436218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.458730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.458769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.244 [2024-09-30 20:00:53.458786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.427 ms 00:17:09.244 [2024-09-30 20:00:53.458794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.481100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.481135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.244 [2024-09-30 20:00:53.481148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.239 ms 00:17:09.244 [2024-09-30 20:00:53.481156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.503641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.244 [2024-09-30 20:00:53.503677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.244 [2024-09-30 20:00:53.503690] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.352 ms 00:17:09.244 [2024-09-30 20:00:53.503697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.244 [2024-09-30 20:00:53.503757] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.244 [2024-09-30 20:00:53.503774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 
00:17:09.244 [2024-09-30 20:00:53.503889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.503992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.244 [2024-09-30 20:00:53.504018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: 
free 00:17:09.244 [2024-09-30 20:00:53.504026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 
state: free 00:17:09.245 [2024-09-30 20:00:53.504151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 
0 state: free 00:17:09.245 [2024-09-30 20:00:53.504286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 
261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 
/ 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.245 [2024-09-30 20:00:53.504693] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.245 [2024-09-30 20:00:53.504705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f28407e-e46f-4786-a195-419638626b81 00:17:09.245 [2024-09-30 20:00:53.504714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.245 [2024-09-30 20:00:53.504723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.245 [2024-09-30 20:00:53.504730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.245 [2024-09-30 20:00:53.504739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.245 [2024-09-30 20:00:53.504747] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.245 [2024-09-30 20:00:53.504756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.245 [2024-09-30 20:00:53.504764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.245 [2024-09-30 20:00:53.504772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.245 [2024-09-30 20:00:53.504778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.245 [2024-09-30 20:00:53.504786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.245 [2024-09-30 
20:00:53.504794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.245 [2024-09-30 20:00:53.504804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:17:09.245 [2024-09-30 20:00:53.504814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.245 [2024-09-30 20:00:53.517916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.245 [2024-09-30 20:00:53.517949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.245 [2024-09-30 20:00:53.517964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.053 ms 00:17:09.245 [2024-09-30 20:00:53.517972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.245 [2024-09-30 20:00:53.518384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.245 [2024-09-30 20:00:53.518402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.245 [2024-09-30 20:00:53.518415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:09.246 [2024-09-30 20:00:53.518423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.246 [2024-09-30 20:00:53.564014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.246 [2024-09-30 20:00:53.564062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.246 [2024-09-30 20:00:53.564076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.246 [2024-09-30 20:00:53.564085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.246 [2024-09-30 20:00:53.564214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.246 [2024-09-30 20:00:53.564223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.246 [2024-09-30 20:00:53.564236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:09.246 [2024-09-30 20:00:53.564244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.246 [2024-09-30 20:00:53.564322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.246 [2024-09-30 20:00:53.564332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.246 [2024-09-30 20:00:53.564344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.246 [2024-09-30 20:00:53.564351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.246 [2024-09-30 20:00:53.564385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.246 [2024-09-30 20:00:53.564393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.246 [2024-09-30 20:00:53.564402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.246 [2024-09-30 20:00:53.564411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.504 [2024-09-30 20:00:53.635367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.504 [2024-09-30 20:00:53.635421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.504 [2024-09-30 20:00:53.635434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.504 [2024-09-30 20:00:53.635442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.686593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.686644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.505 [2024-09-30 20:00:53.686656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.686666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.686753] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.686760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:09.505 [2024-09-30 20:00:53.686771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.686777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.686825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.686832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:09.505 [2024-09-30 20:00:53.686852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.686858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.686957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.686966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:09.505 [2024-09-30 20:00:53.686975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.686981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.687022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.687029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:09.505 [2024-09-30 20:00:53.687037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.687043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.687094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.687105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache 
bdev 00:17:09.505 [2024-09-30 20:00:53.687114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.687120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.687177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.505 [2024-09-30 20:00:53.687193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:09.505 [2024-09-30 20:00:53.687202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.505 [2024-09-30 20:00:53.687208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.505 [2024-09-30 20:00:53.687386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.487 ms, result 0 00:17:09.505 true 00:17:09.505 20:00:53 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73915 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73915 ']' 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73915 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73915 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.505 killing process with pid 73915 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73915' 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73915 00:17:09.505 20:00:53 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73915 00:17:16.133 20:00:59 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom 
bs=4K count=65536 00:17:16.392 65536+0 records in 00:17:16.392 65536+0 records out 00:17:16.392 268435456 bytes (268 MB, 256 MiB) copied, 1.01151 s, 265 MB/s 00:17:16.392 20:01:00 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.650 [2024-09-30 20:01:00.756743] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:16.650 [2024-09-30 20:01:00.756858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74101 ] 00:17:16.650 [2024-09-30 20:01:00.900459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.909 [2024-09-30 20:01:01.081955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.169 [2024-09-30 20:01:01.310681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.169 [2024-09-30 20:01:01.310745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.169 [2024-09-30 20:01:01.464982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.465043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.169 [2024-09-30 20:01:01.465058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.169 [2024-09-30 20:01:01.465065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.467347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.467377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.169 [2024-09-30 20:01:01.467386] 
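The `65536+0 records in / 65536+0 records out` output above comes from the test writing a 256 MiB random pattern file (`dd if=/dev/urandom bs=4K count=65536`) that `spdk_dd` then copies into the `ftl0` bdev. A minimal sketch of that pattern-generation step, using a smaller count and an illustrative path (`/tmp/random_pattern` is an assumption, not the test's actual location):

```shell
# Generate a random data pattern file, as ftl/trim.sh does with
# `dd if=/dev/urandom bs=4K count=65536` (256 MiB). The count is
# reduced here for illustration; the path is hypothetical.
pattern=/tmp/random_pattern
dd if=/dev/urandom of="$pattern" bs=4K count=16 2>/dev/null

# Verify the expected size: 16 blocks * 4096 bytes = 65536 bytes.
stat -c %s "$pattern"
```

With the original `count=65536`, the same arithmetic gives 65536 * 4096 = 268435456 bytes, matching the "268 MB, 256 MiB" line in the `dd` output above.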
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:17:17.169 [2024-09-30 20:01:01.467394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.467458] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.169 [2024-09-30 20:01:01.467972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.169 [2024-09-30 20:01:01.467994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.468003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.169 [2024-09-30 20:01:01.468010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:17:17.169 [2024-09-30 20:01:01.468016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.469459] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:17.169 [2024-09-30 20:01:01.479483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.479513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:17.169 [2024-09-30 20:01:01.479524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.025 ms 00:17:17.169 [2024-09-30 20:01:01.479530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.479607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.479617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:17.169 [2024-09-30 20:01:01.479627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:17.169 [2024-09-30 20:01:01.479633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.485864] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.485889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.169 [2024-09-30 20:01:01.485898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.197 ms 00:17:17.169 [2024-09-30 20:01:01.485904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.485985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.485995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.169 [2024-09-30 20:01:01.486002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:17.169 [2024-09-30 20:01:01.486008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.486030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.486037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.169 [2024-09-30 20:01:01.486043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:17.169 [2024-09-30 20:01:01.486049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.486068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.169 [2024-09-30 20:01:01.489091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.489116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.169 [2024-09-30 20:01:01.489124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:17:17.169 [2024-09-30 20:01:01.489130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.489160] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:17.169 [2024-09-30 20:01:01.489169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.169 [2024-09-30 20:01:01.489176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:17.169 [2024-09-30 20:01:01.489183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.169 [2024-09-30 20:01:01.489196] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:17.169 [2024-09-30 20:01:01.489213] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:17.169 [2024-09-30 20:01:01.489241] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:17.170 [2024-09-30 20:01:01.489255] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:17.170 [2024-09-30 20:01:01.489350] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:17.170 [2024-09-30 20:01:01.489359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.170 [2024-09-30 20:01:01.489368] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:17.170 [2024-09-30 20:01:01.489376] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489384] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.170 [2024-09-30 20:01:01.489396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.170 [2024-09-30 20:01:01.489403] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:17.170 [2024-09-30 20:01:01.489409] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:17.170 [2024-09-30 20:01:01.489416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.170 [2024-09-30 20:01:01.489424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.170 [2024-09-30 20:01:01.489431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:17:17.170 [2024-09-30 20:01:01.489436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.170 [2024-09-30 20:01:01.489503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.170 [2024-09-30 20:01:01.489509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.170 [2024-09-30 20:01:01.489515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:17.170 [2024-09-30 20:01:01.489521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.170 [2024-09-30 20:01:01.489595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.170 [2024-09-30 20:01:01.489609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.170 [2024-09-30 20:01:01.489618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.170 [2024-09-30 20:01:01.489640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.170 [2024-09-30 
20:01:01.489658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.170 [2024-09-30 20:01:01.489669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.170 [2024-09-30 20:01:01.489680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.170 [2024-09-30 20:01:01.489685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.170 [2024-09-30 20:01:01.489690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.170 [2024-09-30 20:01:01.489696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:17.170 [2024-09-30 20:01:01.489701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.170 [2024-09-30 20:01:01.489711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.170 [2024-09-30 20:01:01.489726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.170 [2024-09-30 20:01:01.489742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
00:17:17.170 [2024-09-30 20:01:01.489758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:17.170 [2024-09-30 20:01:01.489774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.170 [2024-09-30 20:01:01.489790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.170 [2024-09-30 20:01:01.489800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.170 [2024-09-30 20:01:01.489805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:17.170 [2024-09-30 20:01:01.489811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.170 [2024-09-30 20:01:01.489817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:17.170 [2024-09-30 20:01:01.489829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:17.170 [2024-09-30 20:01:01.489834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:17.170 [2024-09-30 20:01:01.489846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:17.170 [2024-09-30 20:01:01.489851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489857] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.170 [2024-09-30 20:01:01.489863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.170 [2024-09-30 20:01:01.489868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.170 [2024-09-30 20:01:01.489880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:17.170 [2024-09-30 20:01:01.489886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.170 [2024-09-30 20:01:01.489891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.170 [2024-09-30 20:01:01.489897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.170 [2024-09-30 20:01:01.489903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.170 [2024-09-30 20:01:01.489908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.170 [2024-09-30 20:01:01.489915] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.170 [2024-09-30 20:01:01.489925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.489932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.170 [2024-09-30 20:01:01.489937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:17.170 [2024-09-30 20:01:01.489943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:17.170 [2024-09-30 20:01:01.489949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:17.170 [2024-09-30 20:01:01.489955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:17.170 [2024-09-30 20:01:01.489960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:17.170 [2024-09-30 20:01:01.489966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:17.170 [2024-09-30 20:01:01.489972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:17.170 [2024-09-30 20:01:01.489977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:17.170 [2024-09-30 20:01:01.489983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.489988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.489994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.490000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.490011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:17.170 [2024-09-30 20:01:01.490017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.170 [2024-09-30 
20:01:01.490024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.490030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.170 [2024-09-30 20:01:01.490036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.170 [2024-09-30 20:01:01.490042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.170 [2024-09-30 20:01:01.490047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.170 [2024-09-30 20:01:01.490053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.170 [2024-09-30 20:01:01.490061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.170 [2024-09-30 20:01:01.490066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:17:17.170 [2024-09-30 20:01:01.490072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.170 [2024-09-30 20:01:01.530727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-09-30 20:01:01.530783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.171 [2024-09-30 20:01:01.530799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.596 ms 00:17:17.171 [2024-09-30 20:01:01.530809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-09-30 20:01:01.530993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-09-30 20:01:01.531009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 
00:17:17.171 [2024-09-30 20:01:01.531020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:17.171 [2024-09-30 20:01:01.531030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.429 [2024-09-30 20:01:01.557708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.429 [2024-09-30 20:01:01.557746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.429 [2024-09-30 20:01:01.557756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.653 ms 00:17:17.429 [2024-09-30 20:01:01.557763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.429 [2024-09-30 20:01:01.557870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.429 [2024-09-30 20:01:01.557880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.429 [2024-09-30 20:01:01.557887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:17.429 [2024-09-30 20:01:01.557893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.429 [2024-09-30 20:01:01.558286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.429 [2024-09-30 20:01:01.558311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.429 [2024-09-30 20:01:01.558318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:17:17.429 [2024-09-30 20:01:01.558325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.429 [2024-09-30 20:01:01.558443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.429 [2024-09-30 20:01:01.558453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.429 [2024-09-30 20:01:01.558461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:17.429 [2024-09-30 20:01:01.558467] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.569914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.569940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.430 [2024-09-30 20:01:01.569949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.430 ms 00:17:17.430 [2024-09-30 20:01:01.569955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.580208] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:17.430 [2024-09-30 20:01:01.580239] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:17.430 [2024-09-30 20:01:01.580253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.580260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:17.430 [2024-09-30 20:01:01.580280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.206 ms 00:17:17.430 [2024-09-30 20:01:01.580287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.598965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.598993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:17.430 [2024-09-30 20:01:01.599003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.616 ms 00:17:17.430 [2024-09-30 20:01:01.599014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.608136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.608165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:17.430 [2024-09-30 20:01:01.608173] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.060 ms 00:17:17.430 [2024-09-30 20:01:01.608180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.616833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.616859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:17.430 [2024-09-30 20:01:01.616867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.607 ms 00:17:17.430 [2024-09-30 20:01:01.616873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.617389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.617411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:17.430 [2024-09-30 20:01:01.617420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:17:17.430 [2024-09-30 20:01:01.617427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.665213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.665265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:17.430 [2024-09-30 20:01:01.665285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.766 ms 00:17:17.430 [2024-09-30 20:01:01.665292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.673589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:17.430 [2024-09-30 20:01:01.688419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.688468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.430 [2024-09-30 20:01:01.688480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.012 ms 00:17:17.430 [2024-09-30 20:01:01.688486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.688586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.688596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:17.430 [2024-09-30 20:01:01.688604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:17.430 [2024-09-30 20:01:01.688611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.688673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.688681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:17.430 [2024-09-30 20:01:01.688690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:17.430 [2024-09-30 20:01:01.688697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.688716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.688722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:17.430 [2024-09-30 20:01:01.688730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:17.430 [2024-09-30 20:01:01.688736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.688766] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:17.430 [2024-09-30 20:01:01.688774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.688781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:17.430 [2024-09-30 20:01:01.688788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:17.430 [2024-09-30 20:01:01.688796] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.707706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.707738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:17.430 [2024-09-30 20:01:01.707748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.893 ms 00:17:17.430 [2024-09-30 20:01:01.707755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.707838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.430 [2024-09-30 20:01:01.707847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:17.430 [2024-09-30 20:01:01.707857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:17.430 [2024-09-30 20:01:01.707863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.430 [2024-09-30 20:01:01.708955] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:17.430 [2024-09-30 20:01:01.711386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.700 ms, result 0 00:17:17.430 [2024-09-30 20:01:01.712159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.430 [2024-09-30 20:01:01.723048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.733  Copying: 43/256 [MB] (43 MBps) Copying: 83/256 [MB] (39 MBps) Copying: 121/256 [MB] (38 MBps) Copying: 163/256 [MB] (41 MBps) Copying: 207/256 [MB] (43 MBps) Copying: 249/256 [MB] (41 MBps) Copying: 256/256 [MB] (average 41 MBps)[2024-09-30 20:01:07.903707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:23.733 [2024-09-30 
20:01:07.913196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.733 [2024-09-30 20:01:07.913236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.733 [2024-09-30 20:01:07.913251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:23.733 [2024-09-30 20:01:07.913259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.733 [2024-09-30 20:01:07.913293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.733 [2024-09-30 20:01:07.916101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.733 [2024-09-30 20:01:07.916129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.733 [2024-09-30 20:01:07.916141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:17:23.733 [2024-09-30 20:01:07.916149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.733 [2024-09-30 20:01:07.917974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.918006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.734 [2024-09-30 20:01:07.918021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:17:23.734 [2024-09-30 20:01:07.918029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:07.924987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.925018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.734 [2024-09-30 20:01:07.925028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:17:23.734 [2024-09-30 20:01:07.925036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:07.932015] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.932043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.734 [2024-09-30 20:01:07.932053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:17:23.734 [2024-09-30 20:01:07.932066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:07.955297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.955331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.734 [2024-09-30 20:01:07.955343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:17:23.734 [2024-09-30 20:01:07.955351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:07.969934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.969979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.734 [2024-09-30 20:01:07.969990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.549 ms 00:17:23.734 [2024-09-30 20:01:07.969998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:07.970154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.970176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.734 [2024-09-30 20:01:07.970185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:23.734 [2024-09-30 20:01:07.970193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:07.993495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:07.993534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.734 [2024-09-30 
20:01:07.993543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.286 ms 00:17:23.734 [2024-09-30 20:01:07.993551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:08.016476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:08.016509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.734 [2024-09-30 20:01:08.016519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.891 ms 00:17:23.734 [2024-09-30 20:01:08.016526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:08.038801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:08.038832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.734 [2024-09-30 20:01:08.038842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.241 ms 00:17:23.734 [2024-09-30 20:01:08.038849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:08.061229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.734 [2024-09-30 20:01:08.061260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.734 [2024-09-30 20:01:08.061282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.319 ms 00:17:23.734 [2024-09-30 20:01:08.061289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.734 [2024-09-30 20:01:08.061321] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.734 [2024-09-30 20:01:08.061337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061355] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061464] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 
20:01:08.061572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 
[2024-09-30 20:01:08.061677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:17:23.734 [2024-09-30 20:01:08.061782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: 
free 00:17:23.734 [2024-09-30 20:01:08.061900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.061993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 
state: free 00:17:23.734 [2024-09-30 20:01:08.062009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.734 [2024-09-30 20:01:08.062065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:17:23.735 [2024-09-30 20:01:08.062136] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.735 [2024-09-30 20:01:08.062144] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f28407e-e46f-4786-a195-419638626b81 00:17:23.735 [2024-09-30 20:01:08.062152] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.735 [2024-09-30 20:01:08.062160] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.735 [2024-09-30 20:01:08.062168] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.735 [2024-09-30 20:01:08.062176] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.735 [2024-09-30 20:01:08.062187] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.735 [2024-09-30 20:01:08.062194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.735 [2024-09-30 20:01:08.062201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.735 [2024-09-30 20:01:08.062208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.735 [2024-09-30 20:01:08.062214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.735 [2024-09-30 20:01:08.062222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.735 [2024-09-30 20:01:08.062230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.735 [2024-09-30 20:01:08.062239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:17:23.735 [2024-09-30 20:01:08.062246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.735 [2024-09-30 20:01:08.075179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.735 [2024-09-30 20:01:08.075210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.735 [2024-09-30 20:01:08.075224] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:17:23.735 [2024-09-30 20:01:08.075233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.735 [2024-09-30 20:01:08.075632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.735 [2024-09-30 20:01:08.075649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.735 [2024-09-30 20:01:08.075658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:17:23.735 [2024-09-30 20:01:08.075667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.107208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.107250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.993 [2024-09-30 20:01:08.107261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.107280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.107363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.107372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.993 [2024-09-30 20:01:08.107380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.107387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.107431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.107441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.993 [2024-09-30 20:01:08.107453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.107461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:23.993 [2024-09-30 20:01:08.107479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.107488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.993 [2024-09-30 20:01:08.107496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.107504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.187725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.187779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.993 [2024-09-30 20:01:08.187798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.187807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.253736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.253792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.993 [2024-09-30 20:01:08.253803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.253812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.253883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.253893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.993 [2024-09-30 20:01:08.253901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.253913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.253945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.253953] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.993 [2024-09-30 20:01:08.253962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.253969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.254062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.254072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.993 [2024-09-30 20:01:08.254080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.254088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.254124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.254133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.993 [2024-09-30 20:01:08.254141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.254149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.254191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.254199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.993 [2024-09-30 20:01:08.254207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 20:01:08.254214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.254264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.993 [2024-09-30 20:01:08.254293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.993 [2024-09-30 20:01:08.254301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.993 [2024-09-30 
20:01:08.254309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.993 [2024-09-30 20:01:08.254460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.251 ms, result 0 00:17:24.923 00:17:24.923 00:17:24.923 20:01:09 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74199 00:17:24.923 20:01:09 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74199 00:17:24.923 20:01:09 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:24.923 20:01:09 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74199 ']' 00:17:24.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.923 20:01:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.923 20:01:09 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.923 20:01:09 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.923 20:01:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.923 20:01:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:25.180 [2024-09-30 20:01:09.357246] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:25.180 [2024-09-30 20:01:09.357613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74199 ] 00:17:25.180 [2024-09-30 20:01:09.509501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.437 [2024-09-30 20:01:09.718573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.003 20:01:10 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.003 20:01:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:26.003 20:01:10 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:26.261 [2024-09-30 20:01:10.567319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.261 [2024-09-30 20:01:10.567395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.521 [2024-09-30 20:01:10.740168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.740227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.521 [2024-09-30 20:01:10.740243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.521 [2024-09-30 20:01:10.740254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.743002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.743039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.521 [2024-09-30 20:01:10.743053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:17:26.521 [2024-09-30 20:01:10.743061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.743417] 
mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.521 [2024-09-30 20:01:10.744131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.521 [2024-09-30 20:01:10.744166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.744176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.521 [2024-09-30 20:01:10.744187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:17:26.521 [2024-09-30 20:01:10.744195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.745714] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.521 [2024-09-30 20:01:10.758506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.758544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.521 [2024-09-30 20:01:10.758556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.796 ms 00:17:26.521 [2024-09-30 20:01:10.758566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.758647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.758663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.521 [2024-09-30 20:01:10.758672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:26.521 [2024-09-30 20:01:10.758682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.765074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.765111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.521 [2024-09-30 
20:01:10.765121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.343 ms 00:17:26.521 [2024-09-30 20:01:10.765131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.765235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.765248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.521 [2024-09-30 20:01:10.765257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:26.521 [2024-09-30 20:01:10.765277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.765304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.765316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.521 [2024-09-30 20:01:10.765324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:26.521 [2024-09-30 20:01:10.765333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.765357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.521 [2024-09-30 20:01:10.768828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.768857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.521 [2024-09-30 20:01:10.768868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.475 ms 00:17:26.521 [2024-09-30 20:01:10.768878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.768931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.521 [2024-09-30 20:01:10.768942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.521 [2024-09-30 20:01:10.768951] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:26.521 [2024-09-30 20:01:10.768959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.521 [2024-09-30 20:01:10.768982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.521 [2024-09-30 20:01:10.769000] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.521 [2024-09-30 20:01:10.769043] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.522 [2024-09-30 20:01:10.769062] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:26.522 [2024-09-30 20:01:10.769170] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.522 [2024-09-30 20:01:10.769190] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.522 [2024-09-30 20:01:10.769203] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.522 [2024-09-30 20:01:10.769213] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769224] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769233] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.522 [2024-09-30 20:01:10.769244] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.522 [2024-09-30 20:01:10.769251] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.522 [2024-09-30 20:01:10.769262] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.522 
[2024-09-30 20:01:10.769282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.522 [2024-09-30 20:01:10.769292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.522 [2024-09-30 20:01:10.769299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:17:26.522 [2024-09-30 20:01:10.769309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.522 [2024-09-30 20:01:10.769409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.522 [2024-09-30 20:01:10.769421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.522 [2024-09-30 20:01:10.769429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:26.522 [2024-09-30 20:01:10.769438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.522 [2024-09-30 20:01:10.769541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.522 [2024-09-30 20:01:10.769560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.522 [2024-09-30 20:01:10.769568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.522 [2024-09-30 20:01:10.769596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.522 [2024-09-30 20:01:10.769622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:17:26.522 [2024-09-30 20:01:10.769638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.522 [2024-09-30 20:01:10.769646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.522 [2024-09-30 20:01:10.769653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.522 [2024-09-30 20:01:10.769677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.522 [2024-09-30 20:01:10.769684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.522 [2024-09-30 20:01:10.769693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.522 [2024-09-30 20:01:10.769707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.522 [2024-09-30 20:01:10.769736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.522 [2024-09-30 20:01:10.769762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.522 [2024-09-30 20:01:10.769785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.522 [2024-09-30 20:01:10.769809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.522 [2024-09-30 20:01:10.769839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.522 [2024-09-30 20:01:10.769855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.522 [2024-09-30 20:01:10.769863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.522 [2024-09-30 20:01:10.769869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.522 [2024-09-30 20:01:10.769879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.522 [2024-09-30 20:01:10.769886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.522 [2024-09-30 20:01:10.769895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.522 [2024-09-30 20:01:10.769910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.522 [2024-09-30 20:01:10.769917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769924] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.522 [2024-09-30 20:01:10.769933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.522 [2024-09-30 20:01:10.769942] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.522 [2024-09-30 20:01:10.769949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.522 [2024-09-30 20:01:10.769959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.522 [2024-09-30 20:01:10.769966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.522 [2024-09-30 20:01:10.769974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.522 [2024-09-30 20:01:10.769981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.522 [2024-09-30 20:01:10.769989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.522 [2024-09-30 20:01:10.769996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.522 [2024-09-30 20:01:10.770006] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.522 [2024-09-30 20:01:10.770018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.522 [2024-09-30 20:01:10.770038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.522 [2024-09-30 20:01:10.770049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.522 [2024-09-30 20:01:10.770057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.522 [2024-09-30 20:01:10.770066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.522 [2024-09-30 20:01:10.770073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.522 [2024-09-30 20:01:10.770082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.522 [2024-09-30 20:01:10.770089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.522 [2024-09-30 20:01:10.770098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.522 [2024-09-30 20:01:10.770105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.522 [2024-09-30 20:01:10.770145] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.522 [2024-09-30 20:01:10.770154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770166] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.522 [2024-09-30 20:01:10.770174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.522 [2024-09-30 20:01:10.770183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.522 [2024-09-30 20:01:10.770191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.522 [2024-09-30 20:01:10.770200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.522 [2024-09-30 20:01:10.770207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.522 [2024-09-30 20:01:10.770216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:17:26.522 [2024-09-30 20:01:10.770223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.522 [2024-09-30 20:01:10.799043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.522 [2024-09-30 20:01:10.799081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.522 [2024-09-30 20:01:10.799094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.742 ms 00:17:26.523 [2024-09-30 20:01:10.799103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.523 [2024-09-30 20:01:10.799227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.799236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.523 [2024-09-30 20:01:10.799246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:26.523 [2024-09-30 20:01:10.799254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:26.523 [2024-09-30 20:01:10.838951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.838993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.523 [2024-09-30 20:01:10.839009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.658 ms 00:17:26.523 [2024-09-30 20:01:10.839017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.523 [2024-09-30 20:01:10.839117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.839128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.523 [2024-09-30 20:01:10.839141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.523 [2024-09-30 20:01:10.839149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.523 [2024-09-30 20:01:10.839579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.839602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.523 [2024-09-30 20:01:10.839614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:26.523 [2024-09-30 20:01:10.839622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.523 [2024-09-30 20:01:10.839760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.839770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.523 [2024-09-30 20:01:10.839780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:26.523 [2024-09-30 20:01:10.839790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.523 [2024-09-30 20:01:10.854683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.854714] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.523 [2024-09-30 20:01:10.854726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.870 ms 00:17:26.523 [2024-09-30 20:01:10.854736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.523 [2024-09-30 20:01:10.867414] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:26.523 [2024-09-30 20:01:10.867450] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.523 [2024-09-30 20:01:10.867464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.523 [2024-09-30 20:01:10.867472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.523 [2024-09-30 20:01:10.867484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.625 ms 00:17:26.523 [2024-09-30 20:01:10.867491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:10.891712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:10.891765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.781 [2024-09-30 20:01:10.891777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.146 ms 00:17:26.781 [2024-09-30 20:01:10.891791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:10.903172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:10.903200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.781 [2024-09-30 20:01:10.903214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.306 ms 00:17:26.781 [2024-09-30 20:01:10.903222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:10.914175] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:10.914204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.781 [2024-09-30 20:01:10.914217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.862 ms 00:17:26.781 [2024-09-30 20:01:10.914224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:10.915247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:10.915305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.781 [2024-09-30 20:01:10.915320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:26.781 [2024-09-30 20:01:10.915330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:10.973664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:10.973726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.781 [2024-09-30 20:01:10.973743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.305 ms 00:17:26.781 [2024-09-30 20:01:10.973754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:10.984423] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.781 [2024-09-30 20:01:11.000892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:11.000943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.781 [2024-09-30 20:01:11.000956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.025 ms 00:17:26.781 [2024-09-30 20:01:11.000966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:11.001070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:11.001083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.781 [2024-09-30 20:01:11.001092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:26.781 [2024-09-30 20:01:11.001102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:11.001161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:11.001172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.781 [2024-09-30 20:01:11.001180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:26.781 [2024-09-30 20:01:11.001189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:11.001216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:11.001226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.781 [2024-09-30 20:01:11.001238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.781 [2024-09-30 20:01:11.001250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.781 [2024-09-30 20:01:11.001300] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.781 [2024-09-30 20:01:11.001316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.781 [2024-09-30 20:01:11.001324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.782 [2024-09-30 20:01:11.001334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:26.782 [2024-09-30 20:01:11.001341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.782 [2024-09-30 20:01:11.025478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.782 [2024-09-30 20:01:11.025514] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.782 [2024-09-30 20:01:11.025528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.110 ms 00:17:26.782 [2024-09-30 20:01:11.025538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.782 [2024-09-30 20:01:11.025633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.782 [2024-09-30 20:01:11.025644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.782 [2024-09-30 20:01:11.025654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:26.782 [2024-09-30 20:01:11.025663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.782 [2024-09-30 20:01:11.026918] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.782 [2024-09-30 20:01:11.029999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 286.428 ms, result 0 00:17:26.782 [2024-09-30 20:01:11.031011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.782 Some configs were skipped because the RPC state that can call them passed over. 
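The startup log above reports each management step as a pair of `trace_step` NOTICE lines (`name: …` followed by `duration: … ms`). A small illustrative parser can recover those (step, duration) pairs from such a log; this is a sketch assuming the exact line format shown above, and `parse_steps` is a hypothetical helper, not part of SPDK:

```python
import re

# Illustrative only: match the "name:" and "duration:" trace_step NOTICE
# lines emitted by mngt/ftl_mngt.c during FTL management processes.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms$")

def parse_steps(lines):
    """Pair each 'name:' line with the following 'duration:' line."""
    steps, pending = [], None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

sample = [
    "[2024-09-30 20:01:10.758544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block",
    "[2024-09-30 20:01:10.758556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.796 ms",
]
print(parse_steps(sample))  # [('Load super block', 12.796)]
```

Summing the parsed durations over the startup section approximates the 286.428 ms total the log reports for the 'FTL startup' management process.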
00:17:26.782 20:01:11 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:27.040 [2024-09-30 20:01:11.269605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.040 [2024-09-30 20:01:11.269669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:27.040 [2024-09-30 20:01:11.269683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:17:27.040 [2024-09-30 20:01:11.269693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.040 [2024-09-30 20:01:11.269728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.823 ms, result 0 00:17:27.040 true 00:17:27.040 20:01:11 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:27.298 [2024-09-30 20:01:11.477638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.298 [2024-09-30 20:01:11.477690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:27.298 [2024-09-30 20:01:11.477704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:17:27.298 [2024-09-30 20:01:11.477713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.298 [2024-09-30 20:01:11.477751] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.627 ms, result 0 00:17:27.298 true 00:17:27.298 20:01:11 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74199 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74199 ']' 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74199 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74199 00:17:27.298 killing process with pid 74199 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74199' 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74199 00:17:27.298 20:01:11 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74199 00:17:27.866 [2024-09-30 20:01:12.167617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.167683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:27.866 [2024-09-30 20:01:12.167694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:27.866 [2024-09-30 20:01:12.167702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.167720] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:27.866 [2024-09-30 20:01:12.169869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.169898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:27.866 [2024-09-30 20:01:12.169910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:17:27.866 [2024-09-30 20:01:12.169917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.170152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.170160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:27.866 [2024-09-30 20:01:12.170170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.212 ms 00:17:27.866 [2024-09-30 20:01:12.170177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.173681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.173707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:27.866 [2024-09-30 20:01:12.173716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.487 ms 00:17:27.866 [2024-09-30 20:01:12.173723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.178990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.179016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:27.866 [2024-09-30 20:01:12.179028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:17:27.866 [2024-09-30 20:01:12.179036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.186786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.186812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:27.866 [2024-09-30 20:01:12.186823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.701 ms 00:17:27.866 [2024-09-30 20:01:12.186829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.193625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.193652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:27.866 [2024-09-30 20:01:12.193663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:17:27.866 [2024-09-30 20:01:12.193676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.193788] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.193796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:27.866 [2024-09-30 20:01:12.193805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:27.866 [2024-09-30 20:01:12.193812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.201681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.201708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:27.866 [2024-09-30 20:01:12.201717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.849 ms 00:17:27.866 [2024-09-30 20:01:12.201723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.209244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.209280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:27.866 [2024-09-30 20:01:12.209291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.488 ms 00:17:27.866 [2024-09-30 20:01:12.209297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.216394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.216419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:27.866 [2024-09-30 20:01:12.216427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.066 ms 00:17:27.866 [2024-09-30 20:01:12.216433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.223745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.866 [2024-09-30 20:01:12.223771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:27.866 
[2024-09-30 20:01:12.223779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.260 ms 00:17:27.866 [2024-09-30 20:01:12.223785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.866 [2024-09-30 20:01:12.223820] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:27.866 [2024-09-30 20:01:12.223833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:27.866 [2024-09-30 20:01:12.223952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.223997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224193] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224295] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224388] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 
20:01:12.224479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:27.867 [2024-09-30 20:01:12.224518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:27.867 [2024-09-30 20:01:12.224527] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f28407e-e46f-4786-a195-419638626b81 00:17:27.867 [2024-09-30 20:01:12.224533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:27.867 [2024-09-30 20:01:12.224540] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:27.867 [2024-09-30 20:01:12.224546] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:27.868 [2024-09-30 20:01:12.224553] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:27.868 [2024-09-30 20:01:12.224562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:27.868 [2024-09-30 20:01:12.224570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:27.868 [2024-09-30 20:01:12.224577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:27.868 [2024-09-30 20:01:12.224583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:27.868 [2024-09-30 20:01:12.224588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:27.868 [2024-09-30 20:01:12.224594] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.868 [2024-09-30 20:01:12.224600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:27.868 [2024-09-30 20:01:12.224608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:17:27.868 [2024-09-30 20:01:12.224614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.234535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.127 [2024-09-30 20:01:12.234560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.127 [2024-09-30 20:01:12.234572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.903 ms 00:17:28.127 [2024-09-30 20:01:12.234578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.234894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.127 [2024-09-30 20:01:12.234902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.127 [2024-09-30 20:01:12.234910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:28.127 [2024-09-30 20:01:12.234917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.267911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.267945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.127 [2024-09-30 20:01:12.267955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.267964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.268074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.268082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:17:28.127 [2024-09-30 20:01:12.268090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.268096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.268136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.268143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.127 [2024-09-30 20:01:12.268153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.268160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.268178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.268186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.127 [2024-09-30 20:01:12.268194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.268199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.331395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.331444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.127 [2024-09-30 20:01:12.331457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.331466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.383785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.383835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.127 [2024-09-30 20:01:12.383847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.383854] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.383944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.383953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.127 [2024-09-30 20:01:12.383963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.383970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.383998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.384006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.127 [2024-09-30 20:01:12.384014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.384021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.384097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.384104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.127 [2024-09-30 20:01:12.384113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.384120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.384151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.384158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.127 [2024-09-30 20:01:12.384168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.384174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.384211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.384218] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.127 [2024-09-30 20:01:12.384227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.384233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.384294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.127 [2024-09-30 20:01:12.384304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.127 [2024-09-30 20:01:12.384312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.127 [2024-09-30 20:01:12.384318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.127 [2024-09-30 20:01:12.384438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 216.802 ms, result 0 00:17:28.694 20:01:13 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:28.694 20:01:13 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.952 [2024-09-30 20:01:13.092261] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:28.952 [2024-09-30 20:01:13.092414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74246 ] 00:17:28.952 [2024-09-30 20:01:13.238425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.210 [2024-09-30 20:01:13.419573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.469 [2024-09-30 20:01:13.646587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.469 [2024-09-30 20:01:13.646650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.469 [2024-09-30 20:01:13.800370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.800412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.469 [2024-09-30 20:01:13.800427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.469 [2024-09-30 20:01:13.800433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.802608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.802639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.469 [2024-09-30 20:01:13.802648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:17:29.469 [2024-09-30 20:01:13.802655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.802714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.469 [2024-09-30 20:01:13.803281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.469 [2024-09-30 
20:01:13.803305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.803314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.469 [2024-09-30 20:01:13.803321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:17:29.469 [2024-09-30 20:01:13.803327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.804610] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:29.469 [2024-09-30 20:01:13.814514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.814544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:29.469 [2024-09-30 20:01:13.814553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.905 ms 00:17:29.469 [2024-09-30 20:01:13.814559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.814633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.814642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:29.469 [2024-09-30 20:01:13.814651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:29.469 [2024-09-30 20:01:13.814657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.820821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.820847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.469 [2024-09-30 20:01:13.820855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:17:29.469 [2024-09-30 20:01:13.820861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.820937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.820947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.469 [2024-09-30 20:01:13.820954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:29.469 [2024-09-30 20:01:13.820960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.820978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.820984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.469 [2024-09-30 20:01:13.820990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.469 [2024-09-30 20:01:13.820996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.821017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:29.469 [2024-09-30 20:01:13.823957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.823981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.469 [2024-09-30 20:01:13.823989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:17:29.469 [2024-09-30 20:01:13.823995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.824024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.824034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.469 [2024-09-30 20:01:13.824041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:29.469 [2024-09-30 20:01:13.824047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.824061] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:17:29.469 [2024-09-30 20:01:13.824077] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:29.469 [2024-09-30 20:01:13.824105] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:29.469 [2024-09-30 20:01:13.824117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:29.469 [2024-09-30 20:01:13.824201] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.469 [2024-09-30 20:01:13.824209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.469 [2024-09-30 20:01:13.824218] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:29.469 [2024-09-30 20:01:13.824226] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824233] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824240] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:29.469 [2024-09-30 20:01:13.824246] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.469 [2024-09-30 20:01:13.824252] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.469 [2024-09-30 20:01:13.824259] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.469 [2024-09-30 20:01:13.824275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.824284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.469 [2024-09-30 20:01:13.824290] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:29.469 [2024-09-30 20:01:13.824296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.824363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.469 [2024-09-30 20:01:13.824370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.469 [2024-09-30 20:01:13.824377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:29.469 [2024-09-30 20:01:13.824383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.469 [2024-09-30 20:01:13.824456] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.469 [2024-09-30 20:01:13.824463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.469 [2024-09-30 20:01:13.824472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.469 [2024-09-30 20:01:13.824490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.469 [2024-09-30 20:01:13.824507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.469 [2024-09-30 20:01:13.824517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.469 [2024-09-30 20:01:13.824528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:29.469 [2024-09-30 20:01:13.824533] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.469 [2024-09-30 20:01:13.824538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.469 [2024-09-30 20:01:13.824543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:29.469 [2024-09-30 20:01:13.824548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.469 [2024-09-30 20:01:13.824560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.469 [2024-09-30 20:01:13.824576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.469 [2024-09-30 20:01:13.824591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:29.469 [2024-09-30 20:01:13.824596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.469 [2024-09-30 20:01:13.824602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.469 [2024-09-30 20:01:13.824607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:29.470 [2024-09-30 20:01:13.824612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.470 [2024-09-30 20:01:13.824617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.470 [2024-09-30 20:01:13.824622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:29.470 [2024-09-30 20:01:13.824627] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.470 [2024-09-30 20:01:13.824632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.470 [2024-09-30 20:01:13.824637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:29.470 [2024-09-30 20:01:13.824642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.470 [2024-09-30 20:01:13.824647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.470 [2024-09-30 20:01:13.824652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:29.470 [2024-09-30 20:01:13.824657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.470 [2024-09-30 20:01:13.824662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.470 [2024-09-30 20:01:13.824668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:29.470 [2024-09-30 20:01:13.824672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.470 [2024-09-30 20:01:13.824678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.470 [2024-09-30 20:01:13.824683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:29.470 [2024-09-30 20:01:13.824688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.470 [2024-09-30 20:01:13.824693] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.470 [2024-09-30 20:01:13.824699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.470 [2024-09-30 20:01:13.824705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.470 [2024-09-30 20:01:13.824711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.470 [2024-09-30 20:01:13.824717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:17:29.470 [2024-09-30 20:01:13.824724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.470 [2024-09-30 20:01:13.824730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.470 [2024-09-30 20:01:13.824735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.470 [2024-09-30 20:01:13.824740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.470 [2024-09-30 20:01:13.824746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.470 [2024-09-30 20:01:13.824753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.470 [2024-09-30 20:01:13.824763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:29.470 [2024-09-30 20:01:13.824775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:29.470 [2024-09-30 20:01:13.824780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:29.470 [2024-09-30 20:01:13.824787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:29.470 [2024-09-30 20:01:13.824792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:29.470 [2024-09-30 20:01:13.824798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:29.470 [2024-09-30 20:01:13.824804] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:29.470 [2024-09-30 20:01:13.824810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:29.470 [2024-09-30 20:01:13.824815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:29.470 [2024-09-30 20:01:13.824821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:29.470 [2024-09-30 20:01:13.824849] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.470 [2024-09-30 20:01:13.824855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.470 [2024-09-30 20:01:13.824868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:17:29.470 [2024-09-30 20:01:13.824873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.470 [2024-09-30 20:01:13.824879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.470 [2024-09-30 20:01:13.824885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.470 [2024-09-30 20:01:13.824893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.470 [2024-09-30 20:01:13.824898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:17:29.470 [2024-09-30 20:01:13.824904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.729 [2024-09-30 20:01:13.860724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.729 [2024-09-30 20:01:13.860763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.730 [2024-09-30 20:01:13.860774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.760 ms 00:17:29.730 [2024-09-30 20:01:13.860780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.860892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.860902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.730 [2024-09-30 20:01:13.860909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:29.730 [2024-09-30 20:01:13.860915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.887240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.887276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.730 [2024-09-30 
20:01:13.887284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.308 ms 00:17:29.730 [2024-09-30 20:01:13.887290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.887343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.887351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.730 [2024-09-30 20:01:13.887358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.730 [2024-09-30 20:01:13.887363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.887748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.887761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.730 [2024-09-30 20:01:13.887770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:29.730 [2024-09-30 20:01:13.887776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.887892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.887914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.730 [2024-09-30 20:01:13.887921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:29.730 [2024-09-30 20:01:13.887928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.899418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.899443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.730 [2024-09-30 20:01:13.899451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.473 ms 00:17:29.730 [2024-09-30 20:01:13.899457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:29.730 [2024-09-30 20:01:13.909536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:29.730 [2024-09-30 20:01:13.909568] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:29.730 [2024-09-30 20:01:13.909578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.909585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:29.730 [2024-09-30 20:01:13.909592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.016 ms 00:17:29.730 [2024-09-30 20:01:13.909599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.928988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.929018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:29.730 [2024-09-30 20:01:13.929030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:17:29.730 [2024-09-30 20:01:13.929036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.937961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.937989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:29.730 [2024-09-30 20:01:13.937997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.869 ms 00:17:29.730 [2024-09-30 20:01:13.938003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.946632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.946657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:29.730 [2024-09-30 20:01:13.946665] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 8.586 ms 00:17:29.730 [2024-09-30 20:01:13.946670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.947135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.947157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.730 [2024-09-30 20:01:13.947165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:29.730 [2024-09-30 20:01:13.947171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:13.994549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:13.994597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:29.730 [2024-09-30 20:01:13.994609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.358 ms 00:17:29.730 [2024-09-30 20:01:13.994616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.002782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.730 [2024-09-30 20:01:14.017369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.017401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.730 [2024-09-30 20:01:14.017412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.670 ms 00:17:29.730 [2024-09-30 20:01:14.017418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.017517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.017526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:29.730 [2024-09-30 20:01:14.017533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:29.730 
[2024-09-30 20:01:14.017540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.017593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.017602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.730 [2024-09-30 20:01:14.017608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:29.730 [2024-09-30 20:01:14.017615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.017633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.017639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.730 [2024-09-30 20:01:14.017646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:29.730 [2024-09-30 20:01:14.017652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.017683] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:29.730 [2024-09-30 20:01:14.017691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.017699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:29.730 [2024-09-30 20:01:14.017705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:29.730 [2024-09-30 20:01:14.017711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.036630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.036660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.730 [2024-09-30 20:01:14.036670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:17:29.730 [2024-09-30 20:01:14.036677] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.036755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.730 [2024-09-30 20:01:14.036763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.730 [2024-09-30 20:01:14.036770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:29.730 [2024-09-30 20:01:14.036777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.730 [2024-09-30 20:01:14.037874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.730 [2024-09-30 20:01:14.040151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.205 ms, result 0 00:17:29.730 [2024-09-30 20:01:14.040936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.730 [2024-09-30 20:01:14.051789] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.785  Copying: 256/256 [MB] (average 44 MBps)[2024-09-30 20:01:19.836105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.785 [2024-09-30 20:01:19.846027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.846064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.785 [2024-09-30 20:01:19.846078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:35.785 [2024-09-30 20:01:19.846087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.846109] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.785 [2024-09-30 20:01:19.848890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.848921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.785 [2024-09-30 20:01:19.848933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:17:35.785 [2024-09-30 20:01:19.848941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.849197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.849220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.785 [2024-09-30 20:01:19.849229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:35.785 [2024-09-30 20:01:19.849237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.852934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.852958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.785 [2024-09-30 20:01:19.852967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:17:35.785 [2024-09-30 20:01:19.852976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.859912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.859938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.785 [2024-09-30 20:01:19.859953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:17:35.785 [2024-09-30 20:01:19.859961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.882999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 
[2024-09-30 20:01:19.883031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.785 [2024-09-30 20:01:19.883043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.985 ms 00:17:35.785 [2024-09-30 20:01:19.883050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.897097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.897130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.785 [2024-09-30 20:01:19.897141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.026 ms 00:17:35.785 [2024-09-30 20:01:19.897149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.897294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.897305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.785 [2024-09-30 20:01:19.897314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:35.785 [2024-09-30 20:01:19.897322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.920749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.920781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.785 [2024-09-30 20:01:19.920791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.406 ms 00:17:35.785 [2024-09-30 20:01:19.920798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.942949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.942979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.785 [2024-09-30 20:01:19.942988] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.108 ms 00:17:35.785 [2024-09-30 20:01:19.942996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.965618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.965648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.785 [2024-09-30 20:01:19.965658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.602 ms 00:17:35.785 [2024-09-30 20:01:19.965665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.988333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.785 [2024-09-30 20:01:19.988362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.785 [2024-09-30 20:01:19.988372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.619 ms 00:17:35.785 [2024-09-30 20:01:19.988378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.785 [2024-09-30 20:01:19.988398] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.785 [2024-09-30 20:01:19.988412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.785 [2024-09-30 20:01:19.988485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988863] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988983] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.988998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989089] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.786 [2024-09-30 20:01:19.989157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.787 [2024-09-30 20:01:19.989172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.787 [2024-09-30 20:01:19.989187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.787 [2024-09-30 20:01:19.989195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f28407e-e46f-4786-a195-419638626b81 00:17:35.787 [2024-09-30 20:01:19.989203] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.787 [2024-09-30 20:01:19.989210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total writes: 960 00:17:35.787 [2024-09-30 20:01:19.989217] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.787 [2024-09-30 20:01:19.989228] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.787 [2024-09-30 20:01:19.989235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.787 [2024-09-30 20:01:19.989243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.787 [2024-09-30 20:01:19.989251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.787 [2024-09-30 20:01:19.989257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.787 [2024-09-30 20:01:19.989263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.787 [2024-09-30 20:01:19.989281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.787 [2024-09-30 20:01:19.989289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.787 [2024-09-30 20:01:19.989298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:17:35.787 [2024-09-30 20:01:19.989306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 [2024-09-30 20:01:20.002159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.787 [2024-09-30 20:01:20.002192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.787 [2024-09-30 20:01:20.002203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.826 ms 00:17:35.787 [2024-09-30 20:01:20.002212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 [2024-09-30 20:01:20.002617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.787 [2024-09-30 20:01:20.002636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.787 [2024-09-30 20:01:20.002645] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:17:35.787 [2024-09-30 20:01:20.002653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 [2024-09-30 20:01:20.034724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.787 [2024-09-30 20:01:20.034759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.787 [2024-09-30 20:01:20.034770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.787 [2024-09-30 20:01:20.034778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 [2024-09-30 20:01:20.034863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.787 [2024-09-30 20:01:20.034874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:35.787 [2024-09-30 20:01:20.034881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.787 [2024-09-30 20:01:20.034889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 [2024-09-30 20:01:20.034930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.787 [2024-09-30 20:01:20.034943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:35.787 [2024-09-30 20:01:20.034951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.787 [2024-09-30 20:01:20.034958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 [2024-09-30 20:01:20.034977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.787 [2024-09-30 20:01:20.034985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:35.787 [2024-09-30 20:01:20.034992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.787 [2024-09-30 20:01:20.035000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.787 
[2024-09-30 20:01:20.116349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.787 [2024-09-30 20:01:20.116399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:35.787 [2024-09-30 20:01:20.116411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.787 [2024-09-30 20:01:20.116419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.045 [2024-09-30 20:01:20.182908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.045 [2024-09-30 20:01:20.182956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.045 [2024-09-30 20:01:20.182969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.045 [2024-09-30 20:01:20.182977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.045 [2024-09-30 20:01:20.183052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.045 [2024-09-30 20:01:20.183062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.045 [2024-09-30 20:01:20.183075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.045 [2024-09-30 20:01:20.183083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.045 [2024-09-30 20:01:20.183113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.045 [2024-09-30 20:01:20.183121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.046 [2024-09-30 20:01:20.183129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.046 [2024-09-30 20:01:20.183137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.046 [2024-09-30 20:01:20.183229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.046 [2024-09-30 20:01:20.183239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:17:36.046 [2024-09-30 20:01:20.183248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.046 [2024-09-30 20:01:20.183258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.046 [2024-09-30 20:01:20.183309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.046 [2024-09-30 20:01:20.183321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:36.046 [2024-09-30 20:01:20.183329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.046 [2024-09-30 20:01:20.183338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.046 [2024-09-30 20:01:20.183378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.046 [2024-09-30 20:01:20.183387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.046 [2024-09-30 20:01:20.183395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.046 [2024-09-30 20:01:20.183405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.046 [2024-09-30 20:01:20.183452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.046 [2024-09-30 20:01:20.183462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.046 [2024-09-30 20:01:20.183472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.046 [2024-09-30 20:01:20.183479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.046 [2024-09-30 20:01:20.183624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.581 ms, result 0 00:17:36.981 00:17:36.981 00:17:36.981 20:01:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:36.981 20:01:21 ftl.ftl_trim -- 
ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:37.239 20:01:21 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.497 [2024-09-30 20:01:21.650417] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:37.497 [2024-09-30 20:01:21.650551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74347 ] 00:17:37.497 [2024-09-30 20:01:21.803593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.755 [2024-09-30 20:01:22.006938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.013 [2024-09-30 20:01:22.278059] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:38.013 [2024-09-30 20:01:22.278131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:38.272 [2024-09-30 20:01:22.436465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.436512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:38.272 [2024-09-30 20:01:22.436529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.272 [2024-09-30 20:01:22.436537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.439281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.439317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.272 [2024-09-30 20:01:22.439326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 
00:17:38.272 [2024-09-30 20:01:22.439336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.439405] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:38.272 [2024-09-30 20:01:22.440054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:38.272 [2024-09-30 20:01:22.440082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.440093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.272 [2024-09-30 20:01:22.440101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:17:38.272 [2024-09-30 20:01:22.440110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.441574] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:38.272 [2024-09-30 20:01:22.454554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.454588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:38.272 [2024-09-30 20:01:22.454599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.981 ms 00:17:38.272 [2024-09-30 20:01:22.454607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.454694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.454705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:38.272 [2024-09-30 20:01:22.454717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:38.272 [2024-09-30 20:01:22.454724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.461279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 
[2024-09-30 20:01:22.461310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.272 [2024-09-30 20:01:22.461320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.513 ms 00:17:38.272 [2024-09-30 20:01:22.461328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.461419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.461431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.272 [2024-09-30 20:01:22.461440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:38.272 [2024-09-30 20:01:22.461448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.461472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.461481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:38.272 [2024-09-30 20:01:22.461489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:38.272 [2024-09-30 20:01:22.461496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.461517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:38.272 [2024-09-30 20:01:22.465107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.465137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.272 [2024-09-30 20:01:22.465146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:17:38.272 [2024-09-30 20:01:22.465155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.465206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.465219] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:38.272 [2024-09-30 20:01:22.465228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:38.272 [2024-09-30 20:01:22.465236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.465255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:38.272 [2024-09-30 20:01:22.465284] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:38.272 [2024-09-30 20:01:22.465321] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:38.272 [2024-09-30 20:01:22.465337] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:38.272 [2024-09-30 20:01:22.465443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:38.272 [2024-09-30 20:01:22.465454] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:38.272 [2024-09-30 20:01:22.465465] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:38.272 [2024-09-30 20:01:22.465475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:38.272 [2024-09-30 20:01:22.465484] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:38.272 [2024-09-30 20:01:22.465492] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:38.272 [2024-09-30 20:01:22.465499] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:38.272 [2024-09-30 20:01:22.465506] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 
00:17:38.272 [2024-09-30 20:01:22.465514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:38.272 [2024-09-30 20:01:22.465521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.465531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:38.272 [2024-09-30 20:01:22.465539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:38.272 [2024-09-30 20:01:22.465546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.465646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.272 [2024-09-30 20:01:22.465655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:38.272 [2024-09-30 20:01:22.465663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:38.272 [2024-09-30 20:01:22.465670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.272 [2024-09-30 20:01:22.465770] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:38.272 [2024-09-30 20:01:22.465780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:38.272 [2024-09-30 20:01:22.465792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.272 [2024-09-30 20:01:22.465800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.272 [2024-09-30 20:01:22.465808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:38.272 [2024-09-30 20:01:22.465815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:38.272 [2024-09-30 20:01:22.465822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:38.272 [2024-09-30 20:01:22.465829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:38.272 [2024-09-30 20:01:22.465836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 90.12 MiB 00:17:38.272 [2024-09-30 20:01:22.465843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.272 [2024-09-30 20:01:22.465868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:38.272 [2024-09-30 20:01:22.465883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:38.272 [2024-09-30 20:01:22.465890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.272 [2024-09-30 20:01:22.465897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:38.272 [2024-09-30 20:01:22.465904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:38.272 [2024-09-30 20:01:22.465910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.272 [2024-09-30 20:01:22.465917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:38.273 [2024-09-30 20:01:22.465926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:38.273 [2024-09-30 20:01:22.465933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.273 [2024-09-30 20:01:22.465940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:38.273 [2024-09-30 20:01:22.465947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:38.273 [2024-09-30 20:01:22.465955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.273 [2024-09-30 20:01:22.465961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:38.273 [2024-09-30 20:01:22.465968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:38.273 [2024-09-30 20:01:22.465974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.273 [2024-09-30 20:01:22.465981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:38.273 [2024-09-30 20:01:22.465988] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:38.273 [2024-09-30 20:01:22.465995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.273 [2024-09-30 20:01:22.466002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:38.273 [2024-09-30 20:01:22.466009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:38.273 [2024-09-30 20:01:22.466015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.273 [2024-09-30 20:01:22.466022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:38.273 [2024-09-30 20:01:22.466029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:38.273 [2024-09-30 20:01:22.466035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.273 [2024-09-30 20:01:22.466042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:38.273 [2024-09-30 20:01:22.466048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:38.273 [2024-09-30 20:01:22.466055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.273 [2024-09-30 20:01:22.466062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:38.273 [2024-09-30 20:01:22.466068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:38.273 [2024-09-30 20:01:22.466075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.273 [2024-09-30 20:01:22.466081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:38.273 [2024-09-30 20:01:22.466088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:38.273 [2024-09-30 20:01:22.466094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.273 [2024-09-30 20:01:22.466101] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:38.273 [2024-09-30 
20:01:22.466108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:38.273 [2024-09-30 20:01:22.466117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.273 [2024-09-30 20:01:22.466124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.273 [2024-09-30 20:01:22.466132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:38.273 [2024-09-30 20:01:22.466138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:38.273 [2024-09-30 20:01:22.466147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:38.273 [2024-09-30 20:01:22.466154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:38.273 [2024-09-30 20:01:22.466160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:38.273 [2024-09-30 20:01:22.466167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:38.273 [2024-09-30 20:01:22.466175] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:38.273 [2024-09-30 20:01:22.466187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:38.273 [2024-09-30 20:01:22.466202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:38.273 [2024-09-30 20:01:22.466210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:38.273 [2024-09-30 20:01:22.466217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 
00:17:38.273 [2024-09-30 20:01:22.466224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:38.273 [2024-09-30 20:01:22.466231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:38.273 [2024-09-30 20:01:22.466238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:38.273 [2024-09-30 20:01:22.466245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:38.273 [2024-09-30 20:01:22.466252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:38.273 [2024-09-30 20:01:22.466259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:38.273 [2024-09-30 20:01:22.466306] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:38.273 [2024-09-30 20:01:22.466314] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:38.273 [2024-09-30 20:01:22.466330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:38.273 [2024-09-30 20:01:22.466338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:38.273 [2024-09-30 20:01:22.466345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:38.273 [2024-09-30 20:01:22.466353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.466363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:38.273 [2024-09-30 20:01:22.466370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:17:38.273 [2024-09-30 20:01:22.466377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.273 [2024-09-30 20:01:22.508018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.508063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.273 [2024-09-30 20:01:22.508077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.589 ms 00:17:38.273 [2024-09-30 20:01:22.508085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.273 [2024-09-30 20:01:22.508223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.508236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:38.273 [2024-09-30 20:01:22.508245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:38.273 [2024-09-30 20:01:22.508252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.273 [2024-09-30 20:01:22.540734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.540768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.273 [2024-09-30 20:01:22.540779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.447 ms 00:17:38.273 [2024-09-30 20:01:22.540787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.273 [2024-09-30 20:01:22.540869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.540880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.273 [2024-09-30 20:01:22.540889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:38.273 [2024-09-30 20:01:22.540897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.273 [2024-09-30 20:01:22.541318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.541333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.273 [2024-09-30 20:01:22.541343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:17:38.273 [2024-09-30 20:01:22.541351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.273 [2024-09-30 20:01:22.541488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.273 [2024-09-30 20:01:22.541498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.274 [2024-09-30 20:01:22.541506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:38.274 [2024-09-30 20:01:22.541514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:38.274 [2024-09-30 20:01:22.555311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.274 [2024-09-30 20:01:22.555339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.274 [2024-09-30 20:01:22.555349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.776 ms 00:17:38.274 [2024-09-30 20:01:22.555357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.274 [2024-09-30 20:01:22.568222] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:38.274 [2024-09-30 20:01:22.568258] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:38.274 [2024-09-30 20:01:22.568278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.274 [2024-09-30 20:01:22.568287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:38.274 [2024-09-30 20:01:22.568296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.821 ms 00:17:38.274 [2024-09-30 20:01:22.568304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.274 [2024-09-30 20:01:22.592736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.274 [2024-09-30 20:01:22.592768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:38.274 [2024-09-30 20:01:22.592784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.364 ms 00:17:38.274 [2024-09-30 20:01:22.592791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.274 [2024-09-30 20:01:22.604193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.274 [2024-09-30 20:01:22.604222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:38.274 [2024-09-30 20:01:22.604232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 11.336 ms 00:17:38.274 [2024-09-30 20:01:22.604240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.274 [2024-09-30 20:01:22.615440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.274 [2024-09-30 20:01:22.615469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:38.274 [2024-09-30 20:01:22.615479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.133 ms 00:17:38.274 [2024-09-30 20:01:22.615487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.274 [2024-09-30 20:01:22.616086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.274 [2024-09-30 20:01:22.616111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:38.274 [2024-09-30 20:01:22.616121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:17:38.274 [2024-09-30 20:01:22.616129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.532 [2024-09-30 20:01:22.675524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.532 [2024-09-30 20:01:22.675584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:38.532 [2024-09-30 20:01:22.675599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.371 ms 00:17:38.532 [2024-09-30 20:01:22.675608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.532 [2024-09-30 20:01:22.686522] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:38.532 [2024-09-30 20:01:22.703183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.532 [2024-09-30 20:01:22.703226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:38.532 [2024-09-30 20:01:22.703240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.465 ms 00:17:38.532 [2024-09-30 
20:01:22.703248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.532 [2024-09-30 20:01:22.703366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.532 [2024-09-30 20:01:22.703379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:38.532 [2024-09-30 20:01:22.703388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:38.532 [2024-09-30 20:01:22.703396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.532 [2024-09-30 20:01:22.703457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.532 [2024-09-30 20:01:22.703469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:38.532 [2024-09-30 20:01:22.703477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:38.532 [2024-09-30 20:01:22.703485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.703507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.703516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:38.533 [2024-09-30 20:01:22.703524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:38.533 [2024-09-30 20:01:22.703531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.703567] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:38.533 [2024-09-30 20:01:22.703577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.703588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:38.533 [2024-09-30 20:01:22.703596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:38.533 [2024-09-30 20:01:22.703604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.727590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.727628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:38.533 [2024-09-30 20:01:22.727638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.966 ms 00:17:38.533 [2024-09-30 20:01:22.727647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.727743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.727755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:38.533 [2024-09-30 20:01:22.727764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:38.533 [2024-09-30 20:01:22.727772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.728940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.533 [2024-09-30 20:01:22.731928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 292.161 ms, result 0 00:17:38.533 [2024-09-30 20:01:22.732823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.533 [2024-09-30 20:01:22.745887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.533  Copying: 4096/4096 [kB] (average 38 MBps)[2024-09-30 20:01:22.851717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.533 [2024-09-30 20:01:22.860174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.860207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.533 [2024-09-30 
20:01:22.860218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:38.533 [2024-09-30 20:01:22.860226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.860247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.533 [2024-09-30 20:01:22.863057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.863087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.533 [2024-09-30 20:01:22.863098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:17:38.533 [2024-09-30 20:01:22.863105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.864763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.864798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.533 [2024-09-30 20:01:22.864808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:17:38.533 [2024-09-30 20:01:22.864816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.868829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.868854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.533 [2024-09-30 20:01:22.868863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:17:38.533 [2024-09-30 20:01:22.868871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.533 [2024-09-30 20:01:22.875902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.533 [2024-09-30 20:01:22.875929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.533 [2024-09-30 20:01:22.875943] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 7.006 ms 00:17:38.533 [2024-09-30 20:01:22.875950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:22.898767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:22.898798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.792 [2024-09-30 20:01:22.898809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.750 ms 00:17:38.792 [2024-09-30 20:01:22.898817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:22.912975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:22.913007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.792 [2024-09-30 20:01:22.913019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.125 ms 00:17:38.792 [2024-09-30 20:01:22.913026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:22.913154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:22.913164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.792 [2024-09-30 20:01:22.913173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:38.792 [2024-09-30 20:01:22.913180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:22.936308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:22.936340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.792 [2024-09-30 20:01:22.936350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.108 ms 00:17:38.792 [2024-09-30 20:01:22.936357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 
20:01:22.958684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:22.958713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.792 [2024-09-30 20:01:22.958722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.294 ms 00:17:38.792 [2024-09-30 20:01:22.958729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:22.980686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:22.980716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.792 [2024-09-30 20:01:22.980725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.925 ms 00:17:38.792 [2024-09-30 20:01:22.980732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:23.003042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.792 [2024-09-30 20:01:23.003071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.792 [2024-09-30 20:01:23.003081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms 00:17:38.792 [2024-09-30 20:01:23.003087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.792 [2024-09-30 20:01:23.003119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.792 [2024-09-30 20:01:23.003133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.792 [2024-09-30 20:01:23.003144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.792 [2024-09-30 20:01:23.003151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 
0 state: free 00:17:38.793 [2024-09-30 20:01:23.003167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:17:38.793 [2024-09-30 20:01:23.003286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 
0 state: free 00:17:38.793 [2024-09-30 20:01:23.003400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 
261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 
/ 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.793 [2024-09-30 20:01:23.003763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.794 [2024-09-30 20:01:23.003923] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.794 [2024-09-30 20:01:23.003932] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
3f28407e-e46f-4786-a195-419638626b81 00:17:38.794 [2024-09-30 20:01:23.003939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.794 [2024-09-30 20:01:23.003947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.794 [2024-09-30 20:01:23.003956] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.794 [2024-09-30 20:01:23.003964] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.794 [2024-09-30 20:01:23.003971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.794 [2024-09-30 20:01:23.003978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.794 [2024-09-30 20:01:23.003985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.794 [2024-09-30 20:01:23.003992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.794 [2024-09-30 20:01:23.003998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.794 [2024-09-30 20:01:23.004006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.794 [2024-09-30 20:01:23.004012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.794 [2024-09-30 20:01:23.004020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:17:38.794 [2024-09-30 20:01:23.004027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.016561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.794 [2024-09-30 20:01:23.016594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.794 [2024-09-30 20:01:23.016604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.517 ms 00:17:38.794 [2024-09-30 20:01:23.016612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.016971] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.794 [2024-09-30 20:01:23.017003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.794 [2024-09-30 20:01:23.017012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:17:38.794 [2024-09-30 20:01:23.017020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.049014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.794 [2024-09-30 20:01:23.049049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.794 [2024-09-30 20:01:23.049060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.794 [2024-09-30 20:01:23.049070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.049143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.794 [2024-09-30 20:01:23.049152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.794 [2024-09-30 20:01:23.049162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.794 [2024-09-30 20:01:23.049171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.049212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.794 [2024-09-30 20:01:23.049226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.794 [2024-09-30 20:01:23.049236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.794 [2024-09-30 20:01:23.049245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.049264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.794 [2024-09-30 20:01:23.049289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:17:38.794 [2024-09-30 20:01:23.049299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.794 [2024-09-30 20:01:23.049308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.794 [2024-09-30 20:01:23.129539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.794 [2024-09-30 20:01:23.129594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.794 [2024-09-30 20:01:23.129608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.794 [2024-09-30 20:01:23.129616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.194814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.194870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.057 [2024-09-30 20:01:23.194884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.194892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.194955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.194965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.057 [2024-09-30 20:01:23.194978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.194986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.195017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.195026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.057 [2024-09-30 20:01:23.195034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.195042] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.195137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.195148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.057 [2024-09-30 20:01:23.195156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.195167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.195198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.195207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.057 [2024-09-30 20:01:23.195215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.195223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.195261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.195283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.057 [2024-09-30 20:01:23.195292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.195303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.057 [2024-09-30 20:01:23.195348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.057 [2024-09-30 20:01:23.195357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.057 [2024-09-30 20:01:23.195365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.057 [2024-09-30 20:01:23.195373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.058 [2024-09-30 20:01:23.195522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', 
duration = 335.314 ms, result 0 00:17:39.670 00:17:39.670 00:17:39.938 20:01:24 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74372 00:17:39.938 20:01:24 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74372 00:17:39.938 20:01:24 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:39.938 20:01:24 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74372 ']' 00:17:39.938 20:01:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.938 20:01:24 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.938 20:01:24 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.938 20:01:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.938 20:01:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:39.938 [2024-09-30 20:01:24.128615] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:39.938 [2024-09-30 20:01:24.128747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74372 ] 00:17:39.938 [2024-09-30 20:01:24.282395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.198 [2024-09-30 20:01:24.479141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.768 20:01:25 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.768 20:01:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:40.768 20:01:25 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:41.029 [2024-09-30 20:01:25.314933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.029 [2024-09-30 20:01:25.314999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.292 [2024-09-30 20:01:25.487475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.487517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.292 [2024-09-30 20:01:25.487533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.292 [2024-09-30 20:01:25.487545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.490305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.490335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.292 [2024-09-30 20:01:25.490348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.741 ms 00:17:41.292 [2024-09-30 20:01:25.490356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.490493] 
mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.292 [2024-09-30 20:01:25.491172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.292 [2024-09-30 20:01:25.491198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.491206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.292 [2024-09-30 20:01:25.491217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:17:41.292 [2024-09-30 20:01:25.491225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.492631] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:41.292 [2024-09-30 20:01:25.505975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.506010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:41.292 [2024-09-30 20:01:25.506022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.348 ms 00:17:41.292 [2024-09-30 20:01:25.506033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.506112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.506127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:41.292 [2024-09-30 20:01:25.506135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:41.292 [2024-09-30 20:01:25.506144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.512574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.512610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.292 [2024-09-30 
20:01:25.512619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.384 ms 00:17:41.292 [2024-09-30 20:01:25.512628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.512722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.512734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.292 [2024-09-30 20:01:25.512742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:41.292 [2024-09-30 20:01:25.512752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.512775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.512785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.292 [2024-09-30 20:01:25.512793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.292 [2024-09-30 20:01:25.512803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.512824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:41.292 [2024-09-30 20:01:25.516330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.516355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.292 [2024-09-30 20:01:25.516365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms 00:17:41.292 [2024-09-30 20:01:25.516374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.516422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.516432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.292 [2024-09-30 20:01:25.516442] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:41.292 [2024-09-30 20:01:25.516449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.516470] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:41.292 [2024-09-30 20:01:25.516488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:41.292 [2024-09-30 20:01:25.516530] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:41.292 [2024-09-30 20:01:25.516548] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:41.292 [2024-09-30 20:01:25.516656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.292 [2024-09-30 20:01:25.516667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.292 [2024-09-30 20:01:25.516679] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.292 [2024-09-30 20:01:25.516689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.292 [2024-09-30 20:01:25.516700] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.292 [2024-09-30 20:01:25.516708] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:41.292 [2024-09-30 20:01:25.516718] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.292 [2024-09-30 20:01:25.516725] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.292 [2024-09-30 20:01:25.516735] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.292 
[2024-09-30 20:01:25.516745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.516754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.292 [2024-09-30 20:01:25.516762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:41.292 [2024-09-30 20:01:25.516770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.516867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.292 [2024-09-30 20:01:25.516878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.292 [2024-09-30 20:01:25.516885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:41.292 [2024-09-30 20:01:25.516894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.292 [2024-09-30 20:01:25.516995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.292 [2024-09-30 20:01:25.517008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.292 [2024-09-30 20:01:25.517017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.292 [2024-09-30 20:01:25.517042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.292 [2024-09-30 20:01:25.517068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:17:41.292 [2024-09-30 20:01:25.517083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.292 [2024-09-30 20:01:25.517091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:41.292 [2024-09-30 20:01:25.517097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.292 [2024-09-30 20:01:25.517105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.292 [2024-09-30 20:01:25.517114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:41.292 [2024-09-30 20:01:25.517123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.292 [2024-09-30 20:01:25.517138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.292 [2024-09-30 20:01:25.517165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.292 [2024-09-30 20:01:25.517211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.292 [2024-09-30 20:01:25.517232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.292 [2024-09-30 20:01:25.517254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.292 [2024-09-30 20:01:25.517287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.292 [2024-09-30 20:01:25.517294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:41.292 [2024-09-30 20:01:25.517303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.292 [2024-09-30 20:01:25.517310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.292 [2024-09-30 20:01:25.517318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:41.293 [2024-09-30 20:01:25.517325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.293 [2024-09-30 20:01:25.517333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.293 [2024-09-30 20:01:25.517340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:41.293 [2024-09-30 20:01:25.517350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.293 [2024-09-30 20:01:25.517356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.293 [2024-09-30 20:01:25.517364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:41.293 [2024-09-30 20:01:25.517378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.293 [2024-09-30 20:01:25.517387] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.293 [2024-09-30 20:01:25.517395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.293 [2024-09-30 20:01:25.517403] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.293 [2024-09-30 20:01:25.517411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.293 [2024-09-30 20:01:25.517421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.293 [2024-09-30 20:01:25.517428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.293 [2024-09-30 20:01:25.517436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.293 [2024-09-30 20:01:25.517443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.293 [2024-09-30 20:01:25.517451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.293 [2024-09-30 20:01:25.517457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.293 [2024-09-30 20:01:25.517467] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.293 [2024-09-30 20:01:25.517477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:41.293 [2024-09-30 20:01:25.517496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:41.293 [2024-09-30 20:01:25.517506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:41.293 [2024-09-30 20:01:25.517513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:41.293 [2024-09-30 20:01:25.517523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:41.293 [2024-09-30 20:01:25.517530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:41.293 [2024-09-30 20:01:25.517538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:41.293 [2024-09-30 20:01:25.517545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:41.293 [2024-09-30 20:01:25.517554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:41.293 [2024-09-30 20:01:25.517561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:41.293 [2024-09-30 20:01:25.517601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.293 [2024-09-30 20:01:25.517609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517622] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.293 [2024-09-30 20:01:25.517629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.293 [2024-09-30 20:01:25.517638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.293 [2024-09-30 20:01:25.517645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.293 [2024-09-30 20:01:25.517654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.517661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.293 [2024-09-30 20:01:25.517670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:17:41.293 [2024-09-30 20:01:25.517678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.546571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.546604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.293 [2024-09-30 20:01:25.546617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.831 ms 00:17:41.293 [2024-09-30 20:01:25.546625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.546743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.546752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.293 [2024-09-30 20:01:25.546762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:41.293 [2024-09-30 20:01:25.546770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:41.293 [2024-09-30 20:01:25.585956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.585995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.293 [2024-09-30 20:01:25.586010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.162 ms 00:17:41.293 [2024-09-30 20:01:25.586018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.586107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.586118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.293 [2024-09-30 20:01:25.586131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.293 [2024-09-30 20:01:25.586139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.586598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.586614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.293 [2024-09-30 20:01:25.586626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:41.293 [2024-09-30 20:01:25.586634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.586769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.586778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.293 [2024-09-30 20:01:25.586788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:41.293 [2024-09-30 20:01:25.586798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.601787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.601935] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.293 [2024-09-30 20:01:25.601954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.966 ms 00:17:41.293 [2024-09-30 20:01:25.601965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.614481] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:41.293 [2024-09-30 20:01:25.614514] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.293 [2024-09-30 20:01:25.614529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.614538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.293 [2024-09-30 20:01:25.614548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms 00:17:41.293 [2024-09-30 20:01:25.614555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.638911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.638950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.293 [2024-09-30 20:01:25.638962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.285 ms 00:17:41.293 [2024-09-30 20:01:25.638975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.293 [2024-09-30 20:01:25.651127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.293 [2024-09-30 20:01:25.651158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.293 [2024-09-30 20:01:25.651172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.083 ms 00:17:41.293 [2024-09-30 20:01:25.651179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.555 [2024-09-30 20:01:25.663236] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.555 [2024-09-30 20:01:25.663264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.555 [2024-09-30 20:01:25.663289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.996 ms 00:17:41.555 [2024-09-30 20:01:25.663296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.555 [2024-09-30 20:01:25.663887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.555 [2024-09-30 20:01:25.663912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.555 [2024-09-30 20:01:25.663924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:17:41.555 [2024-09-30 20:01:25.663934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.722601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.722647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.556 [2024-09-30 20:01:25.722662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.642 ms 00:17:41.556 [2024-09-30 20:01:25.722673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.733403] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.556 [2024-09-30 20:01:25.749724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.749768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.556 [2024-09-30 20:01:25.749778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.958 ms 00:17:41.556 [2024-09-30 20:01:25.749788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.749880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.749893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.556 [2024-09-30 20:01:25.749902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:41.556 [2024-09-30 20:01:25.749911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.749964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.749975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.556 [2024-09-30 20:01:25.749983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:41.556 [2024-09-30 20:01:25.749992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.750017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.750027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.556 [2024-09-30 20:01:25.750038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.556 [2024-09-30 20:01:25.750050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.750083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.556 [2024-09-30 20:01:25.750097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.750105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.556 [2024-09-30 20:01:25.750114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:41.556 [2024-09-30 20:01:25.750122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.773756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.773792] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.556 [2024-09-30 20:01:25.773806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.610 ms 00:17:41.556 [2024-09-30 20:01:25.773816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.773921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.556 [2024-09-30 20:01:25.773932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.556 [2024-09-30 20:01:25.773943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:41.556 [2024-09-30 20:01:25.773951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.556 [2024-09-30 20:01:25.774844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.556 [2024-09-30 20:01:25.777721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.055 ms, result 0 00:17:41.556 [2024-09-30 20:01:25.779247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.556 Some configs were skipped because the RPC state that can call them passed over. 
00:17:41.556 20:01:25 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:41.816 [2024-09-30 20:01:26.012236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.816 [2024-09-30 20:01:26.012300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:41.816 [2024-09-30 20:01:26.012313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:17:41.816 [2024-09-30 20:01:26.012324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.816 [2024-09-30 20:01:26.012356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.634 ms, result 0 00:17:41.816 true 00:17:41.816 20:01:26 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:42.074 [2024-09-30 20:01:26.227590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.074 [2024-09-30 20:01:26.227624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:42.074 [2024-09-30 20:01:26.227635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:17:42.074 [2024-09-30 20:01:26.227641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.074 [2024-09-30 20:01:26.227668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.097 ms, result 0 00:17:42.074 true 00:17:42.074 20:01:26 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74372 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74372 ']' 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74372 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74372 00:17:42.074 killing process with pid 74372 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74372' 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74372 00:17:42.074 20:01:26 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74372 00:17:42.643 [2024-09-30 20:01:26.830626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.830685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.643 [2024-09-30 20:01:26.830696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.643 [2024-09-30 20:01:26.830704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.830724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:42.643 [2024-09-30 20:01:26.832862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.832887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.643 [2024-09-30 20:01:26.832900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:17:42.643 [2024-09-30 20:01:26.832906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.833140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.833154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.643 [2024-09-30 20:01:26.833165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.210 ms 00:17:42.643 [2024-09-30 20:01:26.833171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.836488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.836514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.643 [2024-09-30 20:01:26.836523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.300 ms 00:17:42.643 [2024-09-30 20:01:26.836529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.841828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.841972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.643 [2024-09-30 20:01:26.841991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.270 ms 00:17:42.643 [2024-09-30 20:01:26.841999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.849816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.849905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.643 [2024-09-30 20:01:26.849984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.768 ms 00:17:42.643 [2024-09-30 20:01:26.850003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.856562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.856656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.643 [2024-09-30 20:01:26.856729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.511 ms 00:17:42.643 [2024-09-30 20:01:26.856752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.856870] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.856890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.643 [2024-09-30 20:01:26.856909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:42.643 [2024-09-30 20:01:26.856957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.864786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.864871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.643 [2024-09-30 20:01:26.864921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.796 ms 00:17:42.643 [2024-09-30 20:01:26.864937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.872574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.872654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.643 [2024-09-30 20:01:26.872697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.601 ms 00:17:42.643 [2024-09-30 20:01:26.872714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.879795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.879873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.643 [2024-09-30 20:01:26.879913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms 00:17:42.643 [2024-09-30 20:01:26.879929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.887071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.643 [2024-09-30 20:01:26.887151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.643 
[2024-09-30 20:01:26.887214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.084 ms 00:17:42.643 [2024-09-30 20:01:26.887231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.643 [2024-09-30 20:01:26.887283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.643 [2024-09-30 20:01:26.887307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.643 [2024-09-30 20:01:26.887672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.887995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.888957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889060] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889519] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.889589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890916] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.890982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 
20:01:26.891243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.644 [2024-09-30 20:01:26.891402] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.645 [2024-09-30 20:01:26.891434] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f28407e-e46f-4786-a195-419638626b81 00:17:42.645 [2024-09-30 20:01:26.891455] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.645 [2024-09-30 20:01:26.891479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.645 [2024-09-30 20:01:26.891498] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.645 [2024-09-30 20:01:26.891522] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.645 [2024-09-30 20:01:26.891554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.645 [2024-09-30 20:01:26.891578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.645 [2024-09-30 20:01:26.891601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.645 [2024-09-30 20:01:26.891622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.645 [2024-09-30 20:01:26.891639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.645 [2024-09-30 20:01:26.891666] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.645 [2024-09-30 20:01:26.891702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.645 [2024-09-30 20:01:26.891731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:17:42.645 [2024-09-30 20:01:26.891751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.645 [2024-09-30 20:01:26.908491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.645 [2024-09-30 20:01:26.908608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.645 [2024-09-30 20:01:26.908629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.581 ms 00:17:42.645 [2024-09-30 20:01:26.908637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.645 [2024-09-30 20:01:26.909029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.645 [2024-09-30 20:01:26.909040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.645 [2024-09-30 20:01:26.909051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:17:42.645 [2024-09-30 20:01:26.909059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.645 [2024-09-30 20:01:26.950301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.645 [2024-09-30 20:01:26.950338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.645 [2024-09-30 20:01:26.950350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.645 [2024-09-30 20:01:26.950362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.645 [2024-09-30 20:01:26.950475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.645 [2024-09-30 20:01:26.950486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:17:42.645 [2024-09-30 20:01:26.950495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.645 [2024-09-30 20:01:26.950503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.645 [2024-09-30 20:01:26.950548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.645 [2024-09-30 20:01:26.950558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.645 [2024-09-30 20:01:26.950570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.645 [2024-09-30 20:01:26.950577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.645 [2024-09-30 20:01:26.950600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.645 [2024-09-30 20:01:26.950607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.645 [2024-09-30 20:01:26.950617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.645 [2024-09-30 20:01:26.950624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.903 [2024-09-30 20:01:27.030346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.903 [2024-09-30 20:01:27.030404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.904 [2024-09-30 20:01:27.030419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.030429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.094825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.094879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.904 [2024-09-30 20:01:27.094892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.094901] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.096408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.904 [2024-09-30 20:01:27.096430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.096439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.096491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.904 [2024-09-30 20:01:27.096502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.096510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.096624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.904 [2024-09-30 20:01:27.096634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.096642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.096685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.904 [2024-09-30 20:01:27.096697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.096705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.096756] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.904 [2024-09-30 20:01:27.096767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.096775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.904 [2024-09-30 20:01:27.096836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.904 [2024-09-30 20:01:27.096845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.904 [2024-09-30 20:01:27.096853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.904 [2024-09-30 20:01:27.096998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.346 ms, result 0 00:17:43.839 20:01:27 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:43.839 [2024-09-30 20:01:27.954582] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
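The spdk_dd step above is launched with a fixed argument set; assembling the same command line in a sketch makes the parameters explicit (the repo path is the one from this run and will differ per machine; nothing is executed here):

```python
# Path taken from this run's log; SPDK_REPO differs on other machines.
SPDK_REPO = "/home/vagrant/spdk_repo/spdk"

def spdk_dd_cmd(input_bdev, out_file, count, json_cfg):
    """Build the spdk_dd argv used to read `count` blocks from an FTL bdev."""
    return [
        f"{SPDK_REPO}/build/bin/spdk_dd",
        f"--ib={input_bdev}",
        f"--of={out_file}",
        f"--count={count}",
        f"--json={json_cfg}",
    ]

cmd = spdk_dd_cmd("ftl0", f"{SPDK_REPO}/test/ftl/data", 65536,
                  f"{SPDK_REPO}/test/ftl/config/ftl.json")
print(" ".join(cmd))
```

The --json config hands spdk_dd the bdev definitions it needs, since it runs as a standalone SPDK app rather than against the already-killed target process (pid 74372 above).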
00:17:43.839 [2024-09-30 20:01:27.954704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74429 ] 00:17:43.839 [2024-09-30 20:01:28.101707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.097 [2024-09-30 20:01:28.279108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.356 [2024-09-30 20:01:28.509785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.356 [2024-09-30 20:01:28.509851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.356 [2024-09-30 20:01:28.663571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.663619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.356 [2024-09-30 20:01:28.663634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.356 [2024-09-30 20:01:28.663641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.665902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.666076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.356 [2024-09-30 20:01:28.666091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:17:44.356 [2024-09-30 20:01:28.666101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.666241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.356 [2024-09-30 20:01:28.666805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.356 [2024-09-30 
20:01:28.666829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.666838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.356 [2024-09-30 20:01:28.666845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:17:44.356 [2024-09-30 20:01:28.666852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.668180] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.356 [2024-09-30 20:01:28.678507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.678536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.356 [2024-09-30 20:01:28.678546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.328 ms 00:17:44.356 [2024-09-30 20:01:28.678552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.678633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.678642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.356 [2024-09-30 20:01:28.678651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:44.356 [2024-09-30 20:01:28.678657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.685050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.685206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.356 [2024-09-30 20:01:28.685219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.359 ms 00:17:44.356 [2024-09-30 20:01:28.685227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.685320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.685332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.356 [2024-09-30 20:01:28.685340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:44.356 [2024-09-30 20:01:28.685346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.685368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.685376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.356 [2024-09-30 20:01:28.685382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.356 [2024-09-30 20:01:28.685389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.685406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:44.356 [2024-09-30 20:01:28.688444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.688468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.356 [2024-09-30 20:01:28.688476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:17:44.356 [2024-09-30 20:01:28.688482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.356 [2024-09-30 20:01:28.688514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.356 [2024-09-30 20:01:28.688524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.357 [2024-09-30 20:01:28.688531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:44.357 [2024-09-30 20:01:28.688537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.357 [2024-09-30 20:01:28.688552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:17:44.357 [2024-09-30 20:01:28.688569] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.357 [2024-09-30 20:01:28.688597] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.357 [2024-09-30 20:01:28.688609] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.357 [2024-09-30 20:01:28.688695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.357 [2024-09-30 20:01:28.688703] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.357 [2024-09-30 20:01:28.688712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.357 [2024-09-30 20:01:28.688720] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.357 [2024-09-30 20:01:28.688728] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.357 [2024-09-30 20:01:28.688735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:44.357 [2024-09-30 20:01:28.688741] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.357 [2024-09-30 20:01:28.688746] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.357 [2024-09-30 20:01:28.688752] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.357 [2024-09-30 20:01:28.688758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.357 [2024-09-30 20:01:28.688766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.357 [2024-09-30 20:01:28.688772] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:44.357 [2024-09-30 20:01:28.688778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.357 [2024-09-30 20:01:28.688857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.357 [2024-09-30 20:01:28.688865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.357 [2024-09-30 20:01:28.688871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:44.357 [2024-09-30 20:01:28.688877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.357 [2024-09-30 20:01:28.688955] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.357 [2024-09-30 20:01:28.688963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.357 [2024-09-30 20:01:28.688971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.357 [2024-09-30 20:01:28.688977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.688984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.357 [2024-09-30 20:01:28.688989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.688995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.357 [2024-09-30 20:01:28.689007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.357 [2024-09-30 20:01:28.689018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.357 [2024-09-30 20:01:28.689029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:44.357 [2024-09-30 20:01:28.689034] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.357 [2024-09-30 20:01:28.689040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.357 [2024-09-30 20:01:28.689045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:44.357 [2024-09-30 20:01:28.689050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.357 [2024-09-30 20:01:28.689062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.357 [2024-09-30 20:01:28.689078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.357 [2024-09-30 20:01:28.689093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.357 [2024-09-30 20:01:28.689109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.357 [2024-09-30 20:01:28.689124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689129] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.357 [2024-09-30 20:01:28.689139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.357 [2024-09-30 20:01:28.689149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.357 [2024-09-30 20:01:28.689155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:44.357 [2024-09-30 20:01:28.689160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.357 [2024-09-30 20:01:28.689165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.357 [2024-09-30 20:01:28.689170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:44.357 [2024-09-30 20:01:28.689175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.357 [2024-09-30 20:01:28.689186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:44.357 [2024-09-30 20:01:28.689191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689197] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.357 [2024-09-30 20:01:28.689203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.357 [2024-09-30 20:01:28.689209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.357 [2024-09-30 20:01:28.689221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 
00:17:44.357 [2024-09-30 20:01:28.689226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.357 [2024-09-30 20:01:28.689234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.357 [2024-09-30 20:01:28.689240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.357 [2024-09-30 20:01:28.689246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.357 [2024-09-30 20:01:28.689251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.357 [2024-09-30 20:01:28.689258] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.357 [2024-09-30 20:01:28.689283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:44.357 [2024-09-30 20:01:28.689297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:44.357 [2024-09-30 20:01:28.689303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:44.357 [2024-09-30 20:01:28.689309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:44.357 [2024-09-30 20:01:28.689314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:44.357 [2024-09-30 20:01:28.689320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:44.357 [2024-09-30 20:01:28.689326] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:44.357 [2024-09-30 20:01:28.689332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:44.357 [2024-09-30 20:01:28.689338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:44.357 [2024-09-30 20:01:28.689352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:44.357 [2024-09-30 20:01:28.689382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.357 [2024-09-30 20:01:28.689388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.357 [2024-09-30 20:01:28.689400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:17:44.357 [2024-09-30 20:01:28.689406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.357 [2024-09-30 20:01:28.689411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.357 [2024-09-30 20:01:28.689417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.358 [2024-09-30 20:01:28.689425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.358 [2024-09-30 20:01:28.689430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:17:44.358 [2024-09-30 20:01:28.689436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.726425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.726616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.616 [2024-09-30 20:01:28.726639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.933 ms 00:17:44.616 [2024-09-30 20:01:28.726650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.726818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.726833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.616 [2024-09-30 20:01:28.726844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:44.616 [2024-09-30 20:01:28.726853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.753596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.753726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.616 [2024-09-30 
20:01:28.753741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.717 ms 00:17:44.616 [2024-09-30 20:01:28.753748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.753806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.753814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.616 [2024-09-30 20:01:28.753821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.616 [2024-09-30 20:01:28.753828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.754225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.754238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.616 [2024-09-30 20:01:28.754246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:44.616 [2024-09-30 20:01:28.754253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.754393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.754402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.616 [2024-09-30 20:01:28.754410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:44.616 [2024-09-30 20:01:28.754417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.765986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.766012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.616 [2024-09-30 20:01:28.766021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.552 ms 00:17:44.616 [2024-09-30 20:01:28.766027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:44.616 [2024-09-30 20:01:28.776347] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:44.616 [2024-09-30 20:01:28.776475] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.616 [2024-09-30 20:01:28.776487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.776495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.616 [2024-09-30 20:01:28.776502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.357 ms 00:17:44.616 [2024-09-30 20:01:28.776509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.795673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.795774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.616 [2024-09-30 20:01:28.795793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.107 ms 00:17:44.616 [2024-09-30 20:01:28.795800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.804673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.804703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.616 [2024-09-30 20:01:28.804712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.817 ms 00:17:44.616 [2024-09-30 20:01:28.804718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.813392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.813417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.616 [2024-09-30 20:01:28.813426] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 8.631 ms 00:17:44.616 [2024-09-30 20:01:28.813432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.813937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.813959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.616 [2024-09-30 20:01:28.813967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:17:44.616 [2024-09-30 20:01:28.813974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.862162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.862211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.616 [2024-09-30 20:01:28.862223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.167 ms 00:17:44.616 [2024-09-30 20:01:28.862230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.870840] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.616 [2024-09-30 20:01:28.885958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.886111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.616 [2024-09-30 20:01:28.886126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.543 ms 00:17:44.616 [2024-09-30 20:01:28.886133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.886250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.886260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.616 [2024-09-30 20:01:28.886288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.616 
[2024-09-30 20:01:28.886295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.886348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.886358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.616 [2024-09-30 20:01:28.886365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:44.616 [2024-09-30 20:01:28.886371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.886390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.886397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.616 [2024-09-30 20:01:28.886403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.616 [2024-09-30 20:01:28.886410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.886440] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.616 [2024-09-30 20:01:28.886448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.886456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.616 [2024-09-30 20:01:28.886463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:44.616 [2024-09-30 20:01:28.886469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.905107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.905207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.616 [2024-09-30 20:01:28.905221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.620 ms 00:17:44.616 [2024-09-30 20:01:28.905228] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.616 [2024-09-30 20:01:28.905321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.616 [2024-09-30 20:01:28.905331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.616 [2024-09-30 20:01:28.905338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:44.616 [2024-09-30 20:01:28.905344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.617 [2024-09-30 20:01:28.906183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.617 [2024-09-30 20:01:28.908548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 242.346 ms, result 0 00:17:44.617 [2024-09-30 20:01:28.909240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.617 [2024-09-30 20:01:28.924044] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.920  Copying: 42/256 [MB] (42 MBps) Copying: 85/256 [MB] (43 MBps) Copying: 127/256 [MB] (42 MBps) Copying: 171/256 [MB] (43 MBps) Copying: 219/256 [MB] (47 MBps) Copying: 256/256 [MB] (average 43 MBps)[2024-09-30 20:01:35.103803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.920 [2024-09-30 20:01:35.116002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.116046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:50.920 [2024-09-30 20:01:35.116061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:50.920 [2024-09-30 20:01:35.116070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.116094] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:50.920 [2024-09-30 20:01:35.119546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.119578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:50.920 [2024-09-30 20:01:35.119589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:17:50.920 [2024-09-30 20:01:35.119597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.119971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.119993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:50.920 [2024-09-30 20:01:35.120002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:50.920 [2024-09-30 20:01:35.120010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.123702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.123722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:50.920 [2024-09-30 20:01:35.123733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:17:50.920 [2024-09-30 20:01:35.123742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.130701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.131191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:50.920 [2024-09-30 20:01:35.131215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.940 ms 00:17:50.920 [2024-09-30 20:01:35.131224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.154365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 
[2024-09-30 20:01:35.154400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:50.920 [2024-09-30 20:01:35.154411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.057 ms 00:17:50.920 [2024-09-30 20:01:35.154419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.168759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.168795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:50.920 [2024-09-30 20:01:35.168807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.304 ms 00:17:50.920 [2024-09-30 20:01:35.168815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.168957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.168969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:50.920 [2024-09-30 20:01:35.168978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:50.920 [2024-09-30 20:01:35.168986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.192556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.192588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:50.920 [2024-09-30 20:01:35.192599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.548 ms 00:17:50.920 [2024-09-30 20:01:35.192606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.920 [2024-09-30 20:01:35.215139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.920 [2024-09-30 20:01:35.215184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:50.920 [2024-09-30 20:01:35.215195] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 00:17:50.920 [2024-09-30 20:01:35.215203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.921 [2024-09-30 20:01:35.237687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.921 [2024-09-30 20:01:35.237837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:50.921 [2024-09-30 20:01:35.237855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:17:50.921 [2024-09-30 20:01:35.237862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.921 [2024-09-30 20:01:35.259756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.921 [2024-09-30 20:01:35.259786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:50.921 [2024-09-30 20:01:35.259796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.817 ms 00:17:50.921 [2024-09-30 20:01:35.259805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.921 [2024-09-30 20:01:35.259838] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:50.921 [2024-09-30 20:01:35.259855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
6: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.259999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
20: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260367] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:50.921 [2024-09-30 20:01:35.260416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260478] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260583] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:50.922 [2024-09-30 20:01:35.260683] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:50.922 [2024-09-30 20:01:35.260691] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f28407e-e46f-4786-a195-419638626b81 00:17:50.922 [2024-09-30 20:01:35.260700] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:50.922 [2024-09-30 20:01:35.260708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total writes: 960 00:17:50.922 [2024-09-30 20:01:35.260715] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:50.922 [2024-09-30 20:01:35.260726] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:50.922 [2024-09-30 20:01:35.260733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:50.922 [2024-09-30 20:01:35.260741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:50.922 [2024-09-30 20:01:35.260748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:50.922 [2024-09-30 20:01:35.260755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:50.922 [2024-09-30 20:01:35.260761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:50.922 [2024-09-30 20:01:35.260768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.922 [2024-09-30 20:01:35.260775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:50.922 [2024-09-30 20:01:35.260784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:17:50.922 [2024-09-30 20:01:35.260792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.922 [2024-09-30 20:01:35.273818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.922 [2024-09-30 20:01:35.273853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:50.922 [2024-09-30 20:01:35.273863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.010 ms 00:17:50.922 [2024-09-30 20:01:35.273885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.922 [2024-09-30 20:01:35.274259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.922 [2024-09-30 20:01:35.274289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:50.922 [2024-09-30 20:01:35.274299] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:17:50.922 [2024-09-30 20:01:35.274307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.306221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.306256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.181 [2024-09-30 20:01:35.306283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.306292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.306375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.306384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.181 [2024-09-30 20:01:35.306393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.306401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.306461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.306475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.181 [2024-09-30 20:01:35.306484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.306492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.306510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.306518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.181 [2024-09-30 20:01:35.306526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.306533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 
[2024-09-30 20:01:35.386491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.386548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.181 [2024-09-30 20:01:35.386561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.386569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.451980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.181 [2024-09-30 20:01:35.452227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.181 [2024-09-30 20:01:35.452371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.181 [2024-09-30 20:01:35.452427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:17:51.181 [2024-09-30 20:01:35.452553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:51.181 [2024-09-30 20:01:35.452614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.181 [2024-09-30 20:01:35.452680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.181 [2024-09-30 20:01:35.452746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.181 [2024-09-30 20:01:35.452754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.181 [2024-09-30 20:01:35.452761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.181 [2024-09-30 20:01:35.452915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.908 ms, result 0 00:17:52.118 00:17:52.118 00:17:52.118 20:01:36 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:52.684 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 
00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:52.684 Process with pid 74372 is not found 00:17:52.684 20:01:36 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74372 00:17:52.684 20:01:36 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74372 ']' 00:17:52.684 20:01:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74372 00:17:52.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74372) - No such process 00:17:52.684 20:01:36 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74372 is not found' 00:17:52.684 ************************************ 00:17:52.684 END TEST ftl_trim 00:17:52.684 ************************************ 00:17:52.684 00:17:52.684 real 0m51.247s 00:17:52.684 user 1m12.429s 00:17:52.684 sys 0m5.521s 00:17:52.684 20:01:36 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.684 20:01:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:52.684 20:01:36 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:52.684 20:01:36 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:52.684 20:01:36 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.684 20:01:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.684 ************************************ 00:17:52.684 START TEST 
ftl_restore 00:17:52.684 ************************************ 00:17:52.684 20:01:36 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:52.684 * Looking for test storage... 00:17:52.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.684 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.684 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.684 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.943 20:01:37 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 
00:17:52.943 00:17:52.943 ' 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.943 --rc genhtml_branch_coverage=1 00:17:52.943 --rc genhtml_function_coverage=1 00:17:52.943 --rc genhtml_legend=1 00:17:52.943 --rc geninfo_all_blocks=1 00:17:52.943 --rc geninfo_unexecuted_blocks=1 00:17:52.943 00:17:52.943 ' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.73P4rAMc02 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74590 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74590 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74590 ']' 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.943 20:01:37 ftl.ftl_restore -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.943 20:01:37 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.943 20:01:37 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:52.943 [2024-09-30 20:01:37.212535] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:52.943 [2024-09-30 20:01:37.212826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74590 ] 00:17:53.202 [2024-09-30 20:01:37.360045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.462 [2024-09-30 20:01:37.572186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.033 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.033 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:54.033 20:01:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:54.033 20:01:38 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:54.033 20:01:38 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:54.033 20:01:38 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:54.033 20:01:38 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:54.033 20:01:38 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller 
-b nvme0 -t PCIe -a 0000:00:11.0 00:17:54.293 20:01:38 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:54.293 20:01:38 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:54.293 20:01:38 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:54.293 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:54.293 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:54.293 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:54.293 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:54.293 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:54.554 { 00:17:54.554 "name": "nvme0n1", 00:17:54.554 "aliases": [ 00:17:54.554 "e4bdc84c-14ec-40f1-af72-98cdfb747cb6" 00:17:54.554 ], 00:17:54.554 "product_name": "NVMe disk", 00:17:54.554 "block_size": 4096, 00:17:54.554 "num_blocks": 1310720, 00:17:54.554 "uuid": "e4bdc84c-14ec-40f1-af72-98cdfb747cb6", 00:17:54.554 "numa_id": -1, 00:17:54.554 "assigned_rate_limits": { 00:17:54.554 "rw_ios_per_sec": 0, 00:17:54.554 "rw_mbytes_per_sec": 0, 00:17:54.554 "r_mbytes_per_sec": 0, 00:17:54.554 "w_mbytes_per_sec": 0 00:17:54.554 }, 00:17:54.554 "claimed": true, 00:17:54.554 "claim_type": "read_many_write_one", 00:17:54.554 "zoned": false, 00:17:54.554 "supported_io_types": { 00:17:54.554 "read": true, 00:17:54.554 "write": true, 00:17:54.554 "unmap": true, 00:17:54.554 "flush": true, 00:17:54.554 "reset": true, 00:17:54.554 "nvme_admin": true, 00:17:54.554 "nvme_io": true, 00:17:54.554 "nvme_io_md": false, 00:17:54.554 "write_zeroes": true, 00:17:54.554 "zcopy": false, 00:17:54.554 "get_zone_info": false, 00:17:54.554 "zone_management": false, 00:17:54.554 "zone_append": false, 00:17:54.554 "compare": 
true, 00:17:54.554 "compare_and_write": false, 00:17:54.554 "abort": true, 00:17:54.554 "seek_hole": false, 00:17:54.554 "seek_data": false, 00:17:54.554 "copy": true, 00:17:54.554 "nvme_iov_md": false 00:17:54.554 }, 00:17:54.554 "driver_specific": { 00:17:54.554 "nvme": [ 00:17:54.554 { 00:17:54.554 "pci_address": "0000:00:11.0", 00:17:54.554 "trid": { 00:17:54.554 "trtype": "PCIe", 00:17:54.554 "traddr": "0000:00:11.0" 00:17:54.554 }, 00:17:54.554 "ctrlr_data": { 00:17:54.554 "cntlid": 0, 00:17:54.554 "vendor_id": "0x1b36", 00:17:54.554 "model_number": "QEMU NVMe Ctrl", 00:17:54.554 "serial_number": "12341", 00:17:54.554 "firmware_revision": "8.0.0", 00:17:54.554 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:54.554 "oacs": { 00:17:54.554 "security": 0, 00:17:54.554 "format": 1, 00:17:54.554 "firmware": 0, 00:17:54.554 "ns_manage": 1 00:17:54.554 }, 00:17:54.554 "multi_ctrlr": false, 00:17:54.554 "ana_reporting": false 00:17:54.554 }, 00:17:54.554 "vs": { 00:17:54.554 "nvme_version": "1.4" 00:17:54.554 }, 00:17:54.554 "ns_data": { 00:17:54.554 "id": 1, 00:17:54.554 "can_share": false 00:17:54.554 } 00:17:54.554 } 00:17:54.554 ], 00:17:54.554 "mp_policy": "active_passive" 00:17:54.554 } 00:17:54.554 } 00:17:54.554 ]' 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:54.554 20:01:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:54.554 20:01:38 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:54.554 20:01:38 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:54.554 20:01:38 ftl.ftl_restore -- 
ftl/common.sh@67 -- # clear_lvols 00:17:54.554 20:01:38 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:54.554 20:01:38 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.813 20:01:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e3b9884d-d856-432f-b114-b7f48fa29dae 00:17:54.813 20:01:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:54.813 20:01:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3b9884d-d856-432f-b114-b7f48fa29dae 00:17:55.070 20:01:39 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=d839b0f1-d379-45d7-9ef8-0b19af87f959 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d839b0f1-d379-45d7-9ef8-0b19af87f959 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:55.328 20:01:39 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.328 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0e4da945-c7d6-4398-a753-e46d1e55e1d5 
00:17:55.328 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.328 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:55.328 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:55.328 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.585 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:55.585 { 00:17:55.585 "name": "0e4da945-c7d6-4398-a753-e46d1e55e1d5", 00:17:55.585 "aliases": [ 00:17:55.585 "lvs/nvme0n1p0" 00:17:55.585 ], 00:17:55.585 "product_name": "Logical Volume", 00:17:55.585 "block_size": 4096, 00:17:55.585 "num_blocks": 26476544, 00:17:55.585 "uuid": "0e4da945-c7d6-4398-a753-e46d1e55e1d5", 00:17:55.585 "assigned_rate_limits": { 00:17:55.585 "rw_ios_per_sec": 0, 00:17:55.585 "rw_mbytes_per_sec": 0, 00:17:55.585 "r_mbytes_per_sec": 0, 00:17:55.585 "w_mbytes_per_sec": 0 00:17:55.585 }, 00:17:55.585 "claimed": false, 00:17:55.585 "zoned": false, 00:17:55.585 "supported_io_types": { 00:17:55.585 "read": true, 00:17:55.585 "write": true, 00:17:55.585 "unmap": true, 00:17:55.585 "flush": false, 00:17:55.585 "reset": true, 00:17:55.585 "nvme_admin": false, 00:17:55.585 "nvme_io": false, 00:17:55.585 "nvme_io_md": false, 00:17:55.585 "write_zeroes": true, 00:17:55.585 "zcopy": false, 00:17:55.585 "get_zone_info": false, 00:17:55.585 "zone_management": false, 00:17:55.585 "zone_append": false, 00:17:55.585 "compare": false, 00:17:55.585 "compare_and_write": false, 00:17:55.585 "abort": false, 00:17:55.585 "seek_hole": true, 00:17:55.585 "seek_data": true, 00:17:55.585 "copy": false, 00:17:55.585 "nvme_iov_md": false 00:17:55.585 }, 00:17:55.585 "driver_specific": { 00:17:55.585 "lvol": { 00:17:55.585 "lvol_store_uuid": "d839b0f1-d379-45d7-9ef8-0b19af87f959", 00:17:55.585 "base_bdev": "nvme0n1", 00:17:55.585 
"thin_provision": true, 00:17:55.585 "num_allocated_clusters": 0, 00:17:55.585 "snapshot": false, 00:17:55.585 "clone": false, 00:17:55.585 "esnap_clone": false 00:17:55.585 } 00:17:55.585 } 00:17:55.585 } 00:17:55.585 ]' 00:17:55.586 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:55.586 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:55.586 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.586 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:55.586 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:55.586 20:01:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:55.586 20:01:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:55.586 20:01:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:55.586 20:01:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:55.843 20:01:40 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:55.843 20:01:40 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:55.843 20:01:40 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.843 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:55.843 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.843 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:55.843 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:55.843 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:56.100 20:01:40 ftl.ftl_restore -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:56.100 { 00:17:56.100 "name": "0e4da945-c7d6-4398-a753-e46d1e55e1d5", 00:17:56.100 "aliases": [ 00:17:56.100 "lvs/nvme0n1p0" 00:17:56.100 ], 00:17:56.100 "product_name": "Logical Volume", 00:17:56.100 "block_size": 4096, 00:17:56.100 "num_blocks": 26476544, 00:17:56.100 "uuid": "0e4da945-c7d6-4398-a753-e46d1e55e1d5", 00:17:56.100 "assigned_rate_limits": { 00:17:56.100 "rw_ios_per_sec": 0, 00:17:56.100 "rw_mbytes_per_sec": 0, 00:17:56.100 "r_mbytes_per_sec": 0, 00:17:56.100 "w_mbytes_per_sec": 0 00:17:56.100 }, 00:17:56.100 "claimed": false, 00:17:56.100 "zoned": false, 00:17:56.100 "supported_io_types": { 00:17:56.100 "read": true, 00:17:56.100 "write": true, 00:17:56.100 "unmap": true, 00:17:56.100 "flush": false, 00:17:56.100 "reset": true, 00:17:56.100 "nvme_admin": false, 00:17:56.100 "nvme_io": false, 00:17:56.100 "nvme_io_md": false, 00:17:56.100 "write_zeroes": true, 00:17:56.100 "zcopy": false, 00:17:56.100 "get_zone_info": false, 00:17:56.100 "zone_management": false, 00:17:56.100 "zone_append": false, 00:17:56.100 "compare": false, 00:17:56.100 "compare_and_write": false, 00:17:56.100 "abort": false, 00:17:56.100 "seek_hole": true, 00:17:56.100 "seek_data": true, 00:17:56.100 "copy": false, 00:17:56.100 "nvme_iov_md": false 00:17:56.100 }, 00:17:56.100 "driver_specific": { 00:17:56.100 "lvol": { 00:17:56.100 "lvol_store_uuid": "d839b0f1-d379-45d7-9ef8-0b19af87f959", 00:17:56.100 "base_bdev": "nvme0n1", 00:17:56.100 "thin_provision": true, 00:17:56.100 "num_allocated_clusters": 0, 00:17:56.100 "snapshot": false, 00:17:56.100 "clone": false, 00:17:56.100 "esnap_clone": false 00:17:56.100 } 00:17:56.100 } 00:17:56.100 } 00:17:56.100 ]' 00:17:56.100 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:56.100 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:56.100 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] 
.num_blocks' 00:17:56.100 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:56.100 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:56.101 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:56.101 20:01:40 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:56.101 20:01:40 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:56.362 20:01:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:56.362 20:01:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:56.362 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:56.362 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:56.362 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:56.362 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:56.362 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e4da945-c7d6-4398-a753-e46d1e55e1d5 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:56.633 { 00:17:56.633 "name": "0e4da945-c7d6-4398-a753-e46d1e55e1d5", 00:17:56.633 "aliases": [ 00:17:56.633 "lvs/nvme0n1p0" 00:17:56.633 ], 00:17:56.633 "product_name": "Logical Volume", 00:17:56.633 "block_size": 4096, 00:17:56.633 "num_blocks": 26476544, 00:17:56.633 "uuid": "0e4da945-c7d6-4398-a753-e46d1e55e1d5", 00:17:56.633 "assigned_rate_limits": { 00:17:56.633 "rw_ios_per_sec": 0, 00:17:56.633 "rw_mbytes_per_sec": 0, 00:17:56.633 "r_mbytes_per_sec": 0, 00:17:56.633 "w_mbytes_per_sec": 0 00:17:56.633 }, 00:17:56.633 "claimed": false, 00:17:56.633 "zoned": false, 00:17:56.633 "supported_io_types": { 00:17:56.633 
"read": true, 00:17:56.633 "write": true, 00:17:56.633 "unmap": true, 00:17:56.633 "flush": false, 00:17:56.633 "reset": true, 00:17:56.633 "nvme_admin": false, 00:17:56.633 "nvme_io": false, 00:17:56.633 "nvme_io_md": false, 00:17:56.633 "write_zeroes": true, 00:17:56.633 "zcopy": false, 00:17:56.633 "get_zone_info": false, 00:17:56.633 "zone_management": false, 00:17:56.633 "zone_append": false, 00:17:56.633 "compare": false, 00:17:56.633 "compare_and_write": false, 00:17:56.633 "abort": false, 00:17:56.633 "seek_hole": true, 00:17:56.633 "seek_data": true, 00:17:56.633 "copy": false, 00:17:56.633 "nvme_iov_md": false 00:17:56.633 }, 00:17:56.633 "driver_specific": { 00:17:56.633 "lvol": { 00:17:56.633 "lvol_store_uuid": "d839b0f1-d379-45d7-9ef8-0b19af87f959", 00:17:56.633 "base_bdev": "nvme0n1", 00:17:56.633 "thin_provision": true, 00:17:56.633 "num_allocated_clusters": 0, 00:17:56.633 "snapshot": false, 00:17:56.633 "clone": false, 00:17:56.633 "esnap_clone": false 00:17:56.633 } 00:17:56.633 } 00:17:56.633 } 00:17:56.633 ]' 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:56.633 20:01:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0e4da945-c7d6-4398-a753-e46d1e55e1d5 --l2p_dram_limit 10' 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:56.633 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:56.633 20:01:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0e4da945-c7d6-4398-a753-e46d1e55e1d5 --l2p_dram_limit 10 -c nvc0n1p0 00:17:56.916 [2024-09-30 20:01:41.103965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.916 [2024-09-30 20:01:41.104018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.916 [2024-09-30 20:01:41.104032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.916 [2024-09-30 20:01:41.104040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.916 [2024-09-30 20:01:41.104094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.916 [2024-09-30 20:01:41.104102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.916 [2024-09-30 20:01:41.104110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:56.916 [2024-09-30 20:01:41.104116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.916 [2024-09-30 20:01:41.104140] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.916 [2024-09-30 20:01:41.105029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.916 [2024-09-30 20:01:41.105065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.916 [2024-09-30 20:01:41.105074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.916 [2024-09-30 20:01:41.105084] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:17:56.916 [2024-09-30 20:01:41.105092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.916 [2024-09-30 20:01:41.105168] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fe2e57ce-865f-47c0-bf7b-dac67aca0b50 00:17:56.916 [2024-09-30 20:01:41.106464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.916 [2024-09-30 20:01:41.106583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:56.916 [2024-09-30 20:01:41.106596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:56.916 [2024-09-30 20:01:41.106607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.916 [2024-09-30 20:01:41.113381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.916 [2024-09-30 20:01:41.113486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.916 [2024-09-30 20:01:41.113498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.728 ms 00:17:56.916 [2024-09-30 20:01:41.113506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.916 [2024-09-30 20:01:41.113579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.916 [2024-09-30 20:01:41.113588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.916 [2024-09-30 20:01:41.113595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:56.916 [2024-09-30 20:01:41.113608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.917 [2024-09-30 20:01:41.113652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.917 [2024-09-30 20:01:41.113662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.917 [2024-09-30 20:01:41.113668] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:56.917 [2024-09-30 20:01:41.113676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.917 [2024-09-30 20:01:41.113695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.917 [2024-09-30 20:01:41.116971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.917 [2024-09-30 20:01:41.117065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.917 [2024-09-30 20:01:41.117079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:17:56.917 [2024-09-30 20:01:41.117086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.917 [2024-09-30 20:01:41.117118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.917 [2024-09-30 20:01:41.117125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.917 [2024-09-30 20:01:41.117133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:56.917 [2024-09-30 20:01:41.117141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.917 [2024-09-30 20:01:41.117162] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:56.917 [2024-09-30 20:01:41.117288] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:56.917 [2024-09-30 20:01:41.117301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.917 [2024-09-30 20:01:41.117310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:56.917 [2024-09-30 20:01:41.117323] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117330] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117337] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:56.917 [2024-09-30 20:01:41.117343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.917 [2024-09-30 20:01:41.117351] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:56.917 [2024-09-30 20:01:41.117357] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:56.917 [2024-09-30 20:01:41.117365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.917 [2024-09-30 20:01:41.117376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.917 [2024-09-30 20:01:41.117383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:17:56.917 [2024-09-30 20:01:41.117389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.917 [2024-09-30 20:01:41.117458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.917 [2024-09-30 20:01:41.117467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.917 [2024-09-30 20:01:41.117474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:56.917 [2024-09-30 20:01:41.117480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.917 [2024-09-30 20:01:41.117556] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.917 [2024-09-30 20:01:41.117564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.917 [2024-09-30 20:01:41.117572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117586] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:56.917 [2024-09-30 20:01:41.117591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.917 [2024-09-30 20:01:41.117609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.917 [2024-09-30 20:01:41.117621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.917 [2024-09-30 20:01:41.117626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:56.917 [2024-09-30 20:01:41.117634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.917 [2024-09-30 20:01:41.117639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.917 [2024-09-30 20:01:41.117647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:56.917 [2024-09-30 20:01:41.117651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.917 [2024-09-30 20:01:41.117668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.917 [2024-09-30 20:01:41.117686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117699] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.917 [2024-09-30 20:01:41.117704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.917 [2024-09-30 20:01:41.117722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.917 [2024-09-30 20:01:41.117738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.917 [2024-09-30 20:01:41.117758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.917 [2024-09-30 20:01:41.117769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.917 [2024-09-30 20:01:41.117774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:56.917 [2024-09-30 20:01:41.117780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.917 [2024-09-30 20:01:41.117785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:56.917 [2024-09-30 20:01:41.117791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:56.917 [2024-09-30 20:01:41.117796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117802] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:56.917 [2024-09-30 20:01:41.117807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:56.917 [2024-09-30 20:01:41.117814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117819] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.917 [2024-09-30 20:01:41.117826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.917 [2024-09-30 20:01:41.117833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.917 [2024-09-30 20:01:41.117847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:56.917 [2024-09-30 20:01:41.117857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.917 [2024-09-30 20:01:41.117863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.917 [2024-09-30 20:01:41.117870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.917 [2024-09-30 20:01:41.117875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.917 [2024-09-30 20:01:41.117898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.917 [2024-09-30 20:01:41.117907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.917 [2024-09-30 20:01:41.117916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.917 [2024-09-30 20:01:41.117922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:56.917 [2024-09-30 20:01:41.117930] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:56.918 [2024-09-30 20:01:41.117936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:56.918 [2024-09-30 20:01:41.117943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:56.918 [2024-09-30 20:01:41.117949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:56.918 [2024-09-30 20:01:41.117955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:56.918 [2024-09-30 20:01:41.117961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:56.918 [2024-09-30 20:01:41.117968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:56.918 [2024-09-30 20:01:41.117973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:56.918 [2024-09-30 20:01:41.117982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:56.918 [2024-09-30 20:01:41.117987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:56.918 [2024-09-30 20:01:41.117994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:56.918 [2024-09-30 20:01:41.118000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:17:56.918 [2024-09-30 20:01:41.118007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:56.918 [2024-09-30 20:01:41.118012] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.918 [2024-09-30 20:01:41.118020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.918 [2024-09-30 20:01:41.118026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.918 [2024-09-30 20:01:41.118034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.918 [2024-09-30 20:01:41.118040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.918 [2024-09-30 20:01:41.118048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.918 [2024-09-30 20:01:41.118054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.918 [2024-09-30 20:01:41.118061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.918 [2024-09-30 20:01:41.118067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:17:56.918 [2024-09-30 20:01:41.118074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.918 [2024-09-30 20:01:41.118118] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:56.918 [2024-09-30 20:01:41.118130] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:59.465 [2024-09-30 20:01:43.202140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.202414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:59.465 [2024-09-30 20:01:43.202498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2084.012 ms 00:17:59.465 [2024-09-30 20:01:43.202529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.231646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.231858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.465 [2024-09-30 20:01:43.231926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.702 ms 00:17:59.465 [2024-09-30 20:01:43.231955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.232110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.232287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:59.465 [2024-09-30 20:01:43.232313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:59.465 [2024-09-30 20:01:43.232344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.273520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.273706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.465 [2024-09-30 20:01:43.273785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.119 ms 00:17:59.465 [2024-09-30 20:01:43.273817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.273874] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.273930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.465 [2024-09-30 20:01:43.274044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.465 [2024-09-30 20:01:43.274096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.274628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.274758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.465 [2024-09-30 20:01:43.274823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:17:59.465 [2024-09-30 20:01:43.274856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.274991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.275020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.465 [2024-09-30 20:01:43.275089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:59.465 [2024-09-30 20:01:43.275123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.290849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.290965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.465 [2024-09-30 20:01:43.291052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:17:59.465 [2024-09-30 20:01:43.291078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.303537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:59.465 [2024-09-30 20:01:43.306923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 
[2024-09-30 20:01:43.307025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:59.465 [2024-09-30 20:01:43.307080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.754 ms 00:17:59.465 [2024-09-30 20:01:43.307102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.375315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.375442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:59.465 [2024-09-30 20:01:43.375504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.152 ms 00:17:59.465 [2024-09-30 20:01:43.375528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.375719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.375785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:59.465 [2024-09-30 20:01:43.375814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:59.465 [2024-09-30 20:01:43.375834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.400760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.400876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:59.465 [2024-09-30 20:01:43.400927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.764 ms 00:17:59.465 [2024-09-30 20:01:43.400950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.432291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.432426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:59.465 [2024-09-30 20:01:43.432501] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.875 ms 00:17:59.465 [2024-09-30 20:01:43.432523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.433092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.433168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:59.465 [2024-09-30 20:01:43.433343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:59.465 [2024-09-30 20:01:43.433409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.505309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.505429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:59.465 [2024-09-30 20:01:43.505487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.845 ms 00:17:59.465 [2024-09-30 20:01:43.505512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.530365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.530473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:59.465 [2024-09-30 20:01:43.530527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.772 ms 00:17:59.465 [2024-09-30 20:01:43.530550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.554274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.554374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:59.465 [2024-09-30 20:01:43.554425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.648 ms 00:17:59.465 [2024-09-30 20:01:43.554446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 
20:01:43.577770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.577910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:59.465 [2024-09-30 20:01:43.577932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.072 ms 00:17:59.465 [2024-09-30 20:01:43.577941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.577980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.577990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:59.465 [2024-09-30 20:01:43.578007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:59.465 [2024-09-30 20:01:43.578014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.578108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.465 [2024-09-30 20:01:43.578119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:59.465 [2024-09-30 20:01:43.578129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:59.465 [2024-09-30 20:01:43.578138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.465 [2024-09-30 20:01:43.579089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2474.673 ms, result 0 00:17:59.465 { 00:17:59.465 "name": "ftl0", 00:17:59.465 "uuid": "fe2e57ce-865f-47c0-bf7b-dac67aca0b50" 00:17:59.465 } 00:17:59.465 20:01:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:59.465 20:01:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.465 20:01:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:59.465 20:01:43 ftl.ftl_restore -- ftl/restore.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:59.725 [2024-09-30 20:01:43.998581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:43.998624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.725 [2024-09-30 20:01:43.998636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.725 [2024-09-30 20:01:43.998646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:43.998672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.725 [2024-09-30 20:01:44.001378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.001405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.725 [2024-09-30 20:01:44.001425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:17:59.725 [2024-09-30 20:01:44.001433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.001706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.001721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.725 [2024-09-30 20:01:44.001732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:59.725 [2024-09-30 20:01:44.001739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.004978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.004997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.725 [2024-09-30 20:01:44.005009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:17:59.725 [2024-09-30 20:01:44.005020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.011256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.011292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.725 [2024-09-30 20:01:44.011304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.217 ms 00:17:59.725 [2024-09-30 20:01:44.011313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.035332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.035363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.725 [2024-09-30 20:01:44.035376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.964 ms 00:17:59.725 [2024-09-30 20:01:44.035383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.050778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.050880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.725 [2024-09-30 20:01:44.050898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.357 ms 00:17:59.725 [2024-09-30 20:01:44.050905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.051019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.051029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.725 [2024-09-30 20:01:44.051038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:59.725 [2024-09-30 20:01:44.051045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.068821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.068846] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.725 [2024-09-30 20:01:44.068855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:17:59.725 [2024-09-30 20:01:44.068861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.725 [2024-09-30 20:01:44.086259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.725 [2024-09-30 20:01:44.086288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.725 [2024-09-30 20:01:44.086298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.369 ms 00:17:59.725 [2024-09-30 20:01:44.086304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.986 [2024-09-30 20:01:44.103437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.986 [2024-09-30 20:01:44.103461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.986 [2024-09-30 20:01:44.103470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.102 ms 00:17:59.986 [2024-09-30 20:01:44.103476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.986 [2024-09-30 20:01:44.120392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.986 [2024-09-30 20:01:44.120483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.986 [2024-09-30 20:01:44.120498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.859 ms 00:17:59.986 [2024-09-30 20:01:44.120504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.986 [2024-09-30 20:01:44.120529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.986 [2024-09-30 20:01:44.120540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120736] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120828] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120920] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.986 [2024-09-30 20:01:44.120972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.120979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.120985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.120992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.120998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 
20:01:44.121021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 
[2024-09-30 20:01:44.121110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:17:59.987 [2024-09-30 20:01:44.121205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.987 [2024-09-30 20:01:44.121217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.987 [2024-09-30 20:01:44.121227] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe2e57ce-865f-47c0-bf7b-dac67aca0b50 00:17:59.987 [2024-09-30 20:01:44.121233] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.987 [2024-09-30 20:01:44.121241] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.987 [2024-09-30 20:01:44.121247] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.987 [2024-09-30 20:01:44.121254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.987 [2024-09-30 20:01:44.121260] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.987 [2024-09-30 20:01:44.121282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.987 [2024-09-30 20:01:44.121290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.987 [2024-09-30 20:01:44.121297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.987 [2024-09-30 20:01:44.121302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.987 [2024-09-30 20:01:44.121309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.987 [2024-09-30 20:01:44.121316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.987 [2024-09-30 20:01:44.121324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:17:59.987 [2024-09-30 20:01:44.121329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.131039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.987 [2024-09-30 20:01:44.131064] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.987 [2024-09-30 20:01:44.131073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.683 ms 00:17:59.987 [2024-09-30 20:01:44.131079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.131392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.987 [2024-09-30 20:01:44.131400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.987 [2024-09-30 20:01:44.131409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:59.987 [2024-09-30 20:01:44.131415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.162128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.162164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.987 [2024-09-30 20:01:44.162175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.162184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.162240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.162246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.987 [2024-09-30 20:01:44.162254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.162260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.162332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.162341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.987 [2024-09-30 20:01:44.162349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.162355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.162375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.162381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.987 [2024-09-30 20:01:44.162389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.162395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.224416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.224462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.987 [2024-09-30 20:01:44.224474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.224481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.987 [2024-09-30 20:01:44.274477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.274483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.987 [2024-09-30 20:01:44.274589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.274596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274637] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.987 [2024-09-30 20:01:44.274655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.274661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.987 [2024-09-30 20:01:44.274751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.274757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.987 [2024-09-30 20:01:44.274804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.274810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.987 [2024-09-30 20:01:44.274860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.987 [2024-09-30 20:01:44.274866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.987 [2024-09-30 20:01:44.274909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.987 [2024-09-30 20:01:44.274918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:17:59.988 [2024-09-30 20:01:44.274926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.988 [2024-09-30 20:01:44.274933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.988 [2024-09-30 20:01:44.275051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.437 ms, result 0 00:17:59.988 true 00:17:59.988 20:01:44 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74590 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74590 ']' 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74590 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74590 00:17:59.988 killing process with pid 74590 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74590' 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74590 00:17:59.988 20:01:44 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74590 00:18:06.559 20:01:50 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:09.843 262144+0 records in 00:18:09.844 262144+0 records out 00:18:09.844 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.84322 s, 279 MB/s 00:18:09.844 20:01:54 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:11.741 20:01:56 ftl.ftl_restore -- ftl/restore.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:12.000 [2024-09-30 20:01:56.136056] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:12.000 [2024-09-30 20:01:56.136175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74805 ] 00:18:12.000 [2024-09-30 20:01:56.286328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.258 [2024-09-30 20:01:56.498978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.516 [2024-09-30 20:01:56.770851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.516 [2024-09-30 20:01:56.770932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.774 [2024-09-30 20:01:56.925819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.774 [2024-09-30 20:01:56.925891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.774 [2024-09-30 20:01:56.925913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.774 [2024-09-30 20:01:56.925926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.774 [2024-09-30 20:01:56.925980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.774 [2024-09-30 20:01:56.925991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.774 [2024-09-30 20:01:56.926000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:12.774 [2024-09-30 20:01:56.926007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.774 [2024-09-30 
20:01:56.926028] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.774 [2024-09-30 20:01:56.926719] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.774 [2024-09-30 20:01:56.926741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.774 [2024-09-30 20:01:56.926749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.774 [2024-09-30 20:01:56.926758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:18:12.774 [2024-09-30 20:01:56.926766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.774 [2024-09-30 20:01:56.928207] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.774 [2024-09-30 20:01:56.941246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.774 [2024-09-30 20:01:56.941303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.774 [2024-09-30 20:01:56.941317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.040 ms 00:18:12.774 [2024-09-30 20:01:56.941325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.774 [2024-09-30 20:01:56.941392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.774 [2024-09-30 20:01:56.941402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.775 [2024-09-30 20:01:56.941411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:12.775 [2024-09-30 20:01:56.941418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.948128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.948326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.775 
[2024-09-30 20:01:56.948344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.640 ms 00:18:12.775 [2024-09-30 20:01:56.948352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.948435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.948444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.775 [2024-09-30 20:01:56.948454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:12.775 [2024-09-30 20:01:56.948461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.948517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.948527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.775 [2024-09-30 20:01:56.948535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:12.775 [2024-09-30 20:01:56.948543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.948568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.775 [2024-09-30 20:01:56.952166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.952287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.775 [2024-09-30 20:01:56.952302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:18:12.775 [2024-09-30 20:01:56.952311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.952344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.952353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.775 [2024-09-30 20:01:56.952361] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:12.775 [2024-09-30 20:01:56.952368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.952401] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.775 [2024-09-30 20:01:56.952421] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:12.775 [2024-09-30 20:01:56.952458] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.775 [2024-09-30 20:01:56.952474] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:12.775 [2024-09-30 20:01:56.952580] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:12.775 [2024-09-30 20:01:56.952590] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.775 [2024-09-30 20:01:56.952601] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:12.775 [2024-09-30 20:01:56.952614] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.775 [2024-09-30 20:01:56.952623] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.775 [2024-09-30 20:01:56.952632] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:12.775 [2024-09-30 20:01:56.952640] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.775 [2024-09-30 20:01:56.952648] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:12.775 [2024-09-30 20:01:56.952656] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:12.775 
[2024-09-30 20:01:56.952664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.952672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.775 [2024-09-30 20:01:56.952679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:18:12.775 [2024-09-30 20:01:56.952687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.952780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.775 [2024-09-30 20:01:56.952791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.775 [2024-09-30 20:01:56.952799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:12.775 [2024-09-30 20:01:56.952806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.775 [2024-09-30 20:01:56.952909] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.775 [2024-09-30 20:01:56.952919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.775 [2024-09-30 20:01:56.952928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.775 [2024-09-30 20:01:56.952936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.775 [2024-09-30 20:01:56.952944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.775 [2024-09-30 20:01:56.952951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.775 [2024-09-30 20:01:56.952958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:12.775 [2024-09-30 20:01:56.952964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.775 [2024-09-30 20:01:56.952972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:12.775 [2024-09-30 20:01:56.952979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:18:12.775 [2024-09-30 20:01:56.952986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.775 [2024-09-30 20:01:56.952994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:12.775 [2024-09-30 20:01:56.953000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.775 [2024-09-30 20:01:56.953013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.775 [2024-09-30 20:01:56.953020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:12.775 [2024-09-30 20:01:56.953026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.775 [2024-09-30 20:01:56.953039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:12.775 [2024-09-30 20:01:56.953046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.775 [2024-09-30 20:01:56.953062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.775 [2024-09-30 20:01:56.953075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.775 [2024-09-30 20:01:56.953082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.775 [2024-09-30 20:01:56.953096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.775 [2024-09-30 20:01:56.953102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:18:12.775 [2024-09-30 20:01:56.953115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.775 [2024-09-30 20:01:56.953121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.775 [2024-09-30 20:01:56.953135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.775 [2024-09-30 20:01:56.953141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.775 [2024-09-30 20:01:56.953154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.775 [2024-09-30 20:01:56.953161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:12.775 [2024-09-30 20:01:56.953168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.775 [2024-09-30 20:01:56.953175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:12.775 [2024-09-30 20:01:56.953181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:12.775 [2024-09-30 20:01:56.953188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.775 [2024-09-30 20:01:56.953195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:12.776 [2024-09-30 20:01:56.953202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:12.776 [2024-09-30 20:01:56.953209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.776 [2024-09-30 20:01:56.953216] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.776 [2024-09-30 20:01:56.953224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.776 [2024-09-30 20:01:56.953233] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.776 [2024-09-30 20:01:56.953241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.776 [2024-09-30 20:01:56.953248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.776 [2024-09-30 20:01:56.953255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.776 [2024-09-30 20:01:56.953261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.776 [2024-09-30 20:01:56.953279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.776 [2024-09-30 20:01:56.953288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.776 [2024-09-30 20:01:56.953295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.776 [2024-09-30 20:01:56.953304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.776 [2024-09-30 20:01:56.953315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:12.776 [2024-09-30 20:01:56.953332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:12.776 [2024-09-30 20:01:56.953339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:12.776 [2024-09-30 20:01:56.953346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:12.776 [2024-09-30 20:01:56.953354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:12.776 [2024-09-30 20:01:56.953361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:12.776 [2024-09-30 20:01:56.953369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:12.776 [2024-09-30 20:01:56.953376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:12.776 [2024-09-30 20:01:56.953383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:12.776 [2024-09-30 20:01:56.953390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:12.776 [2024-09-30 20:01:56.953426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.776 [2024-09-30 20:01:56.953434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953442] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.776 [2024-09-30 20:01:56.953450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.776 [2024-09-30 20:01:56.953457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.776 [2024-09-30 20:01:56.953464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.776 [2024-09-30 20:01:56.953472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:56.953485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.776 [2024-09-30 20:01:56.953492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:18:12.776 [2024-09-30 20:01:56.953500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:56.994144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:56.994200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.776 [2024-09-30 20:01:56.994213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.590 ms 00:18:12.776 [2024-09-30 20:01:56.994222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:56.994361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:56.994374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.776 [2024-09-30 20:01:56.994383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:12.776 [2024-09-30 20:01:56.994391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:12.776 [2024-09-30 20:01:57.026792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.026838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.776 [2024-09-30 20:01:57.026854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.326 ms 00:18:12.776 [2024-09-30 20:01:57.026862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.026911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.026920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.776 [2024-09-30 20:01:57.026929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:12.776 [2024-09-30 20:01:57.026937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.027410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.027442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.776 [2024-09-30 20:01:57.027451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:12.776 [2024-09-30 20:01:57.027463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.027602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.027612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.776 [2024-09-30 20:01:57.027621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:12.776 [2024-09-30 20:01:57.027629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.040896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.040929] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.776 [2024-09-30 20:01:57.040939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.245 ms 00:18:12.776 [2024-09-30 20:01:57.040947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.053812] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:12.776 [2024-09-30 20:01:57.053846] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.776 [2024-09-30 20:01:57.053858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.053867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.776 [2024-09-30 20:01:57.053876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.808 ms 00:18:12.776 [2024-09-30 20:01:57.053883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.078227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.078425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.776 [2024-09-30 20:01:57.078442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.296 ms 00:18:12.776 [2024-09-30 20:01:57.078451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.089959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.090072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.776 [2024-09-30 20:01:57.090087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.482 ms 00:18:12.776 [2024-09-30 20:01:57.090096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.101222] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.101342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.776 [2024-09-30 20:01:57.101357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.096 ms 00:18:12.776 [2024-09-30 20:01:57.101365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.776 [2024-09-30 20:01:57.102018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.776 [2024-09-30 20:01:57.102044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.776 [2024-09-30 20:01:57.102054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:18:12.776 [2024-09-30 20:01:57.102063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.035 [2024-09-30 20:01:57.161009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.035 [2024-09-30 20:01:57.161225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:13.035 [2024-09-30 20:01:57.161245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.926 ms 00:18:13.035 [2024-09-30 20:01:57.161254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.035 [2024-09-30 20:01:57.172554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:13.035 [2024-09-30 20:01:57.175572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.035 [2024-09-30 20:01:57.175712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.035 [2024-09-30 20:01:57.175730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.169 ms 00:18:13.035 [2024-09-30 20:01:57.175738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.035 [2024-09-30 20:01:57.175869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:13.035 [2024-09-30 20:01:57.175880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:13.035 [2024-09-30 20:01:57.175890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:13.035 [2024-09-30 20:01:57.175898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.035 [2024-09-30 20:01:57.175969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.035 [2024-09-30 20:01:57.175980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.035 [2024-09-30 20:01:57.175989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:13.035 [2024-09-30 20:01:57.175997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.036 [2024-09-30 20:01:57.176017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.036 [2024-09-30 20:01:57.176029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.036 [2024-09-30 20:01:57.176037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:13.036 [2024-09-30 20:01:57.176044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.036 [2024-09-30 20:01:57.176079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:13.036 [2024-09-30 20:01:57.176089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.036 [2024-09-30 20:01:57.176097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:13.036 [2024-09-30 20:01:57.176105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:13.036 [2024-09-30 20:01:57.176116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.036 [2024-09-30 20:01:57.200712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.036 [2024-09-30 20:01:57.200759] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.036 [2024-09-30 20:01:57.200772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.577 ms 00:18:13.036 [2024-09-30 20:01:57.200781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.036 [2024-09-30 20:01:57.200864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.036 [2024-09-30 20:01:57.200875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.036 [2024-09-30 20:01:57.200884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:13.036 [2024-09-30 20:01:57.200892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.036 [2024-09-30 20:01:57.201999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.703 ms, result 0 00:18:36.116  Copying: 43/1024 [MB] (43 MBps) Copying: 91/1024 [MB] (48 MBps) Copying: 135/1024 [MB] (44 MBps) Copying: 178/1024 [MB] (42 MBps) Copying: 221/1024 [MB] (42 MBps) Copying: 264/1024 [MB] (43 MBps) Copying: 309/1024 [MB] (45 MBps) Copying: 353/1024 [MB] (44 MBps) Copying: 398/1024 [MB] (44 MBps) Copying: 443/1024 [MB] (45 MBps) Copying: 486/1024 [MB] (43 MBps) Copying: 529/1024 [MB] (43 MBps) Copying: 573/1024 [MB] (43 MBps) Copying: 617/1024 [MB] (43 MBps) Copying: 660/1024 [MB] (43 MBps) Copying: 704/1024 [MB] (43 MBps) Copying: 749/1024 [MB] (44 MBps) Copying: 794/1024 [MB] (45 MBps) Copying: 838/1024 [MB] (43 MBps) Copying: 881/1024 [MB] (42 MBps) Copying: 925/1024 [MB] (44 MBps) Copying: 969/1024 [MB] (43 MBps) Copying: 1013/1024 [MB] (44 MBps) Copying: 1024/1024 [MB] (average 44 MBps)[2024-09-30 20:02:20.448005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.116 [2024-09-30 20:02:20.448065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:36.116 [2024-09-30 
20:02:20.448080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:36.116 [2024-09-30 20:02:20.448089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.116 [2024-09-30 20:02:20.448113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.116 [2024-09-30 20:02:20.450980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.116 [2024-09-30 20:02:20.451015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:36.116 [2024-09-30 20:02:20.451025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:18:36.116 [2024-09-30 20:02:20.451033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.116 [2024-09-30 20:02:20.452436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.116 [2024-09-30 20:02:20.452618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:36.116 [2024-09-30 20:02:20.452635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:18:36.116 [2024-09-30 20:02:20.452643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.116 [2024-09-30 20:02:20.465817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.116 [2024-09-30 20:02:20.465854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:36.116 [2024-09-30 20:02:20.465864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.156 ms 00:18:36.116 [2024-09-30 20:02:20.465872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.116 [2024-09-30 20:02:20.472172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.116 [2024-09-30 20:02:20.472312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:36.116 [2024-09-30 20:02:20.472328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 6.270 ms 00:18:36.116 [2024-09-30 20:02:20.472337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.497307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.497341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:36.376 [2024-09-30 20:02:20.497351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.900 ms 00:18:36.376 [2024-09-30 20:02:20.497359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.512019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.512155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:36.376 [2024-09-30 20:02:20.512176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:18:36.376 [2024-09-30 20:02:20.512185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.512320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.512330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:36.376 [2024-09-30 20:02:20.512339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:36.376 [2024-09-30 20:02:20.512347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.536249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.536296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:36.376 [2024-09-30 20:02:20.536307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.887 ms 00:18:36.376 [2024-09-30 20:02:20.536314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 
20:02:20.559750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.559783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:36.376 [2024-09-30 20:02:20.559793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.402 ms 00:18:36.376 [2024-09-30 20:02:20.559799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.582647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.582781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:36.376 [2024-09-30 20:02:20.582798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.814 ms 00:18:36.376 [2024-09-30 20:02:20.582805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.605716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.376 [2024-09-30 20:02:20.605854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:36.376 [2024-09-30 20:02:20.605869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.859 ms 00:18:36.376 [2024-09-30 20:02:20.605877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.376 [2024-09-30 20:02:20.605923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:36.376 [2024-09-30 20:02:20.605939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.605963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.605976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.605989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 
0 state: free 00:18:36.376 [2024-09-30 20:02:20.606002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 
state: free 00:18:36.376 [2024-09-30 20:02:20.606163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 
0 state: free 00:18:36.376 [2024-09-30 20:02:20.606297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 
261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:36.376 [2024-09-30 20:02:20.606527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 
/ 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:36.377 [2024-09-30 20:02:20.606821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:36.377 [2024-09-30 20:02:20.606829] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
fe2e57ce-865f-47c0-bf7b-dac67aca0b50 00:18:36.377 [2024-09-30 20:02:20.606836] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:36.377 [2024-09-30 20:02:20.606843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:36.377 [2024-09-30 20:02:20.606850] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:36.377 [2024-09-30 20:02:20.606858] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:36.377 [2024-09-30 20:02:20.606865] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:36.377 [2024-09-30 20:02:20.606872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:36.377 [2024-09-30 20:02:20.606884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:36.377 [2024-09-30 20:02:20.606891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:36.377 [2024-09-30 20:02:20.606897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:36.377 [2024-09-30 20:02:20.606904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.377 [2024-09-30 20:02:20.606911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:36.377 [2024-09-30 20:02:20.606925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:18:36.377 [2024-09-30 20:02:20.606933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.619930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.377 [2024-09-30 20:02:20.619963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:36.377 [2024-09-30 20:02:20.619973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.981 ms 00:18:36.377 [2024-09-30 20:02:20.619981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.620377] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.377 [2024-09-30 20:02:20.620389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:36.377 [2024-09-30 20:02:20.620397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:18:36.377 [2024-09-30 20:02:20.620405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.650973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.377 [2024-09-30 20:02:20.651007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.377 [2024-09-30 20:02:20.651017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.377 [2024-09-30 20:02:20.651028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.651082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.377 [2024-09-30 20:02:20.651090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.377 [2024-09-30 20:02:20.651098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.377 [2024-09-30 20:02:20.651105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.651157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.377 [2024-09-30 20:02:20.651168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.377 [2024-09-30 20:02:20.651176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.377 [2024-09-30 20:02:20.651183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.651201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.377 [2024-09-30 20:02:20.651209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:18:36.377 [2024-09-30 20:02:20.651217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.377 [2024-09-30 20:02:20.651224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.377 [2024-09-30 20:02:20.733411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.377 [2024-09-30 20:02:20.733457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.377 [2024-09-30 20:02:20.733469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.377 [2024-09-30 20:02:20.733478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.800559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.800743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.636 [2024-09-30 20:02:20.800762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.800772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.800857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.800867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.636 [2024-09-30 20:02:20.800876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.800885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.800920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.800935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.636 [2024-09-30 20:02:20.800943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.800951] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.801046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.801057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.636 [2024-09-30 20:02:20.801065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.801073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.801102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.801112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:36.636 [2024-09-30 20:02:20.801123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.801131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.801169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.801179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.636 [2024-09-30 20:02:20.801187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.801195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.801239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.636 [2024-09-30 20:02:20.801252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.636 [2024-09-30 20:02:20.801261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.636 [2024-09-30 20:02:20.801295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.636 [2024-09-30 20:02:20.801417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', 
duration = 353.382 ms, result 0 00:18:38.539 00:18:38.539 00:18:38.539 20:02:22 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:38.539 [2024-09-30 20:02:22.771508] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:38.539 [2024-09-30 20:02:22.771639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75078 ] 00:18:38.797 [2024-09-30 20:02:22.915127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.797 [2024-09-30 20:02:23.123004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.056 [2024-09-30 20:02:23.392378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.056 [2024-09-30 20:02:23.392451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.316 [2024-09-30 20:02:23.547096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.547148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:39.316 [2024-09-30 20:02:23.547162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:39.316 [2024-09-30 20:02:23.547175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.547219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.547230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.316 [2024-09-30 20:02:23.547239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 
00:18:39.316 [2024-09-30 20:02:23.547246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.547284] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:39.316 [2024-09-30 20:02:23.547984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:39.316 [2024-09-30 20:02:23.548006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.548014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.316 [2024-09-30 20:02:23.548023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:39.316 [2024-09-30 20:02:23.548032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.549355] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:39.316 [2024-09-30 20:02:23.562032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.562064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:39.316 [2024-09-30 20:02:23.562076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.678 ms 00:18:39.316 [2024-09-30 20:02:23.562083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.562135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.562144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:39.316 [2024-09-30 20:02:23.562153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:39.316 [2024-09-30 20:02:23.562160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.568519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 
[2024-09-30 20:02:23.568683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.316 [2024-09-30 20:02:23.568699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.303 ms 00:18:39.316 [2024-09-30 20:02:23.568708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.568787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.568797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.316 [2024-09-30 20:02:23.568805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:39.316 [2024-09-30 20:02:23.568813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.568858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.568868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:39.316 [2024-09-30 20:02:23.568876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:39.316 [2024-09-30 20:02:23.568884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.568905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:39.316 [2024-09-30 20:02:23.572403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.572431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.316 [2024-09-30 20:02:23.572441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:18:39.316 [2024-09-30 20:02:23.572449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.572478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.572487] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:39.316 [2024-09-30 20:02:23.572496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:39.316 [2024-09-30 20:02:23.572503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.572534] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:39.316 [2024-09-30 20:02:23.572553] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:39.316 [2024-09-30 20:02:23.572590] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:39.316 [2024-09-30 20:02:23.572605] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:39.316 [2024-09-30 20:02:23.572711] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:39.316 [2024-09-30 20:02:23.572723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:39.316 [2024-09-30 20:02:23.572733] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:39.316 [2024-09-30 20:02:23.572746] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:39.316 [2024-09-30 20:02:23.572755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:39.316 [2024-09-30 20:02:23.572763] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:39.316 [2024-09-30 20:02:23.572771] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:39.316 [2024-09-30 20:02:23.572778] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 
00:18:39.316 [2024-09-30 20:02:23.572785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:39.316 [2024-09-30 20:02:23.572793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.572801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:39.316 [2024-09-30 20:02:23.572810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:39.316 [2024-09-30 20:02:23.572817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.572900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.316 [2024-09-30 20:02:23.572911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:39.316 [2024-09-30 20:02:23.572919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:39.316 [2024-09-30 20:02:23.572926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.316 [2024-09-30 20:02:23.573039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:39.316 [2024-09-30 20:02:23.573050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:39.316 [2024-09-30 20:02:23.573058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.316 [2024-09-30 20:02:23.573066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:39.316 [2024-09-30 20:02:23.573082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:39.316 [2024-09-30 20:02:23.573097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:39.316 [2024-09-30 20:02:23.573104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 80.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.316 [2024-09-30 20:02:23.573117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:39.316 [2024-09-30 20:02:23.573124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:39.316 [2024-09-30 20:02:23.573131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.316 [2024-09-30 20:02:23.573143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:39.316 [2024-09-30 20:02:23.573150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:39.316 [2024-09-30 20:02:23.573157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:39.316 [2024-09-30 20:02:23.573170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:39.316 [2024-09-30 20:02:23.573177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:39.316 [2024-09-30 20:02:23.573191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.316 [2024-09-30 20:02:23.573205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:39.316 [2024-09-30 20:02:23.573212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:39.316 [2024-09-30 20:02:23.573218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.317 [2024-09-30 20:02:23.573225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:39.317 [2024-09-30 20:02:23.573232] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:39.317 [2024-09-30 20:02:23.573238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.317 [2024-09-30 20:02:23.573244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:39.317 [2024-09-30 20:02:23.573250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:39.317 [2024-09-30 20:02:23.573257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.317 [2024-09-30 20:02:23.573263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:39.317 [2024-09-30 20:02:23.573289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:39.317 [2024-09-30 20:02:23.573296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.317 [2024-09-30 20:02:23.573303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:39.317 [2024-09-30 20:02:23.573311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:39.317 [2024-09-30 20:02:23.573318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.317 [2024-09-30 20:02:23.573326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:39.317 [2024-09-30 20:02:23.573332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:39.317 [2024-09-30 20:02:23.573343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.317 [2024-09-30 20:02:23.573351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:39.317 [2024-09-30 20:02:23.573357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:39.317 [2024-09-30 20:02:23.573363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.317 [2024-09-30 20:02:23.573370] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:39.317 [2024-09-30 
20:02:23.573378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:39.317 [2024-09-30 20:02:23.573388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.317 [2024-09-30 20:02:23.573395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.317 [2024-09-30 20:02:23.573403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:39.317 [2024-09-30 20:02:23.573410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:39.317 [2024-09-30 20:02:23.573416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:39.317 [2024-09-30 20:02:23.573423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:39.317 [2024-09-30 20:02:23.573430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:39.317 [2024-09-30 20:02:23.573437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:39.317 [2024-09-30 20:02:23.573447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:39.317 [2024-09-30 20:02:23.573457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:39.317 [2024-09-30 20:02:23.573473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:39.317 [2024-09-30 20:02:23.573480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:39.317 [2024-09-30 20:02:23.573487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 
00:18:39.317 [2024-09-30 20:02:23.573494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:39.317 [2024-09-30 20:02:23.573501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:39.317 [2024-09-30 20:02:23.573508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:39.317 [2024-09-30 20:02:23.573515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:39.317 [2024-09-30 20:02:23.573523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:39.317 [2024-09-30 20:02:23.573530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:39.317 [2024-09-30 20:02:23.573565] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:39.317 [2024-09-30 20:02:23.573573] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:39.317 [2024-09-30 20:02:23.573588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:39.317 [2024-09-30 20:02:23.573597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:39.317 [2024-09-30 20:02:23.573604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:39.317 [2024-09-30 20:02:23.573611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.573619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:39.317 [2024-09-30 20:02:23.573628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:18:39.317 [2024-09-30 20:02:23.573635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.619735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.619781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.317 [2024-09-30 20:02:23.619794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.055 ms 00:18:39.317 [2024-09-30 20:02:23.619803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.619901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.619911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:39.317 [2024-09-30 20:02:23.619920] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:39.317 [2024-09-30 20:02:23.619929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.652371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.652406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.317 [2024-09-30 20:02:23.652420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.383 ms 00:18:39.317 [2024-09-30 20:02:23.652429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.652461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.652470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.317 [2024-09-30 20:02:23.652479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:39.317 [2024-09-30 20:02:23.652487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.652934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.652950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.317 [2024-09-30 20:02:23.652959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:18:39.317 [2024-09-30 20:02:23.652971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.653104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.653113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.317 [2024-09-30 20:02:23.653122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:39.317 [2024-09-30 20:02:23.653129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:39.317 [2024-09-30 20:02:23.666384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.666414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.317 [2024-09-30 20:02:23.666424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.235 ms 00:18:39.317 [2024-09-30 20:02:23.666432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.317 [2024-09-30 20:02:23.679175] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:39.317 [2024-09-30 20:02:23.679207] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:39.317 [2024-09-30 20:02:23.679219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.317 [2024-09-30 20:02:23.679227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:39.317 [2024-09-30 20:02:23.679236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.696 ms 00:18:39.317 [2024-09-30 20:02:23.679243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.703836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.703869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:39.576 [2024-09-30 20:02:23.703881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.544 ms 00:18:39.576 [2024-09-30 20:02:23.703888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.715133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.715161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:39.576 [2024-09-30 20:02:23.715171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 11.210 ms 00:18:39.576 [2024-09-30 20:02:23.715179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.726291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.726443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:39.576 [2024-09-30 20:02:23.726459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.074 ms 00:18:39.576 [2024-09-30 20:02:23.726468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.727080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.727100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:39.576 [2024-09-30 20:02:23.727110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:18:39.576 [2024-09-30 20:02:23.727118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.785887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.786103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:39.576 [2024-09-30 20:02:23.786122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.752 ms 00:18:39.576 [2024-09-30 20:02:23.786131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.797453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:39.576 [2024-09-30 20:02:23.800339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.800480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:39.576 [2024-09-30 20:02:23.800499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.897 ms 00:18:39.576 [2024-09-30 
20:02:23.800513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.800621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.800633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:39.576 [2024-09-30 20:02:23.800643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:39.576 [2024-09-30 20:02:23.800650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.800722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.800733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:39.576 [2024-09-30 20:02:23.800741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:39.576 [2024-09-30 20:02:23.800749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.800771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.800780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:39.576 [2024-09-30 20:02:23.800788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:39.576 [2024-09-30 20:02:23.800796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.576 [2024-09-30 20:02:23.800830] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:39.576 [2024-09-30 20:02:23.800841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.576 [2024-09-30 20:02:23.800848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:39.576 [2024-09-30 20:02:23.800860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:39.577 [2024-09-30 20:02:23.800868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:39.577 [2024-09-30 20:02:23.824239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.577 [2024-09-30 20:02:23.824388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:39.577 [2024-09-30 20:02:23.824405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.353 ms 00:18:39.577 [2024-09-30 20:02:23.824414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.577 [2024-09-30 20:02:23.824482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.577 [2024-09-30 20:02:23.824492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:39.577 [2024-09-30 20:02:23.824500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:39.577 [2024-09-30 20:02:23.824508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.577 [2024-09-30 20:02:23.825722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.161 ms, result 0 00:19:01.468  Copying: 1024/1024 [MB] (average 47 MBps)[2024-09-30 20:02:45.622013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.622082] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.468 [2024-09-30 20:02:45.622096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:01.468 [2024-09-30 20:02:45.622107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.622127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.468 [2024-09-30 20:02:45.624390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.624419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.468 [2024-09-30 20:02:45.624428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:19:01.468 [2024-09-30 20:02:45.624435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.624626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.624635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.468 [2024-09-30 20:02:45.624642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:19:01.468 [2024-09-30 20:02:45.624649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.627294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.627407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.468 [2024-09-30 20:02:45.627420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:19:01.468 [2024-09-30 20:02:45.627427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.632826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.632852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finish L2P trims 00:19:01.468 [2024-09-30 20:02:45.632859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.380 ms 00:19:01.468 [2024-09-30 20:02:45.632866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.652615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.652644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.468 [2024-09-30 20:02:45.652653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.696 ms 00:19:01.468 [2024-09-30 20:02:45.652659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.664258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.664293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.468 [2024-09-30 20:02:45.664303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.581 ms 00:19:01.468 [2024-09-30 20:02:45.664310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.664404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.664412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.468 [2024-09-30 20:02:45.664419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:01.468 [2024-09-30 20:02:45.664425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.682409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.682433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:01.468 [2024-09-30 20:02:45.682441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.972 ms 00:19:01.468 [2024-09-30 
20:02:45.682447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.699988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.700012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:01.468 [2024-09-30 20:02:45.700020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.525 ms 00:19:01.468 [2024-09-30 20:02:45.700026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.717187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.717211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.468 [2024-09-30 20:02:45.717219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.147 ms 00:19:01.468 [2024-09-30 20:02:45.717224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.734281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.468 [2024-09-30 20:02:45.734305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.468 [2024-09-30 20:02:45.734314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.023 ms 00:19:01.468 [2024-09-30 20:02:45.734320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.468 [2024-09-30 20:02:45.734335] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.468 [2024-09-30 20:02:45.734347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.468 [2024-09-30 20:02:45.734355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.468 [2024-09-30 20:02:45.734362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.468 
[2024-09-30 20:02:45.734368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.468 [2024-09-30 20:02:45.734374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.468 [2024-09-30 20:02:45.734381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.468 [2024-09-30 20:02:45.734386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 
[2024-09-30 20:02:45.734451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:19:01.469 [2024-09-30 20:02:45.734534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: 
free 00:19:01.469 [2024-09-30 20:02:45.734619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:01.469 [2024-09-30 20:02:45.734701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 
0 state: free 00:19:01.469 [2024-09-30 20:02:45.734787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.469 [2024-09-30 20:02:45.734931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.470 [2024-09-30 20:02:45.734938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.470 [2024-09-30 20:02:45.734943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.470 [2024-09-30 20:02:45.734956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.470 
[2024-09-30 20:02:45.734962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe2e57ce-865f-47c0-bf7b-dac67aca0b50 00:19:01.470 [2024-09-30 20:02:45.734974] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.470 [2024-09-30 20:02:45.734980] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.470 [2024-09-30 20:02:45.734986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.470 [2024-09-30 20:02:45.734993] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.470 [2024-09-30 20:02:45.734998] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.470 [2024-09-30 20:02:45.735007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.470 [2024-09-30 20:02:45.735014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.470 [2024-09-30 20:02:45.735019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.470 [2024-09-30 20:02:45.735023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.470 [2024-09-30 20:02:45.735029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.470 [2024-09-30 20:02:45.735040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.470 [2024-09-30 20:02:45.735047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:19:01.470 [2024-09-30 20:02:45.735053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.470 [2024-09-30 20:02:45.745069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.470 [2024-09-30 20:02:45.745189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.470 [2024-09-30 20:02:45.745202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.004 ms 00:19:01.470 [2024-09-30 20:02:45.745213] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.470 [2024-09-30 20:02:45.745517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.470 [2024-09-30 20:02:45.745529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.470 [2024-09-30 20:02:45.745537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:19:01.470 [2024-09-30 20:02:45.745543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.470 [2024-09-30 20:02:45.768939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.470 [2024-09-30 20:02:45.769045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.470 [2024-09-30 20:02:45.769089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.470 [2024-09-30 20:02:45.769112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.470 [2024-09-30 20:02:45.769176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.470 [2024-09-30 20:02:45.769192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.470 [2024-09-30 20:02:45.769207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.470 [2024-09-30 20:02:45.769222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.470 [2024-09-30 20:02:45.769283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.470 [2024-09-30 20:02:45.769303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.470 [2024-09-30 20:02:45.769324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.470 [2024-09-30 20:02:45.769409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.470 [2024-09-30 20:02:45.769438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.470 [2024-09-30 
20:02:45.769455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.470 [2024-09-30 20:02:45.769498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.470 [2024-09-30 20:02:45.769528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.832499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.832673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.728 [2024-09-30 20:02:45.832717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.832739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.884338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.884512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.728 [2024-09-30 20:02:45.884553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.884571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.884653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.884672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.728 [2024-09-30 20:02:45.884687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.884702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.884744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.884763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.728 [2024-09-30 20:02:45.884779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.884829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.884922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.884984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.728 [2024-09-30 20:02:45.885004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.885039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.885080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.885103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.728 [2024-09-30 20:02:45.885118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.885164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.885211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.885229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.728 [2024-09-30 20:02:45.885244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.885258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.885350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.728 [2024-09-30 20:02:45.885371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.728 [2024-09-30 20:02:45.885386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.728 [2024-09-30 20:02:45.885401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.728 [2024-09-30 20:02:45.885517] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 263.486 ms, result 0 00:19:02.295 00:19:02.295 00:19:02.295 20:02:46 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:04.196 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:04.196 20:02:48 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:04.196 [2024-09-30 20:02:48.240758] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:04.196 [2024-09-30 20:02:48.241043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75347 ] 00:19:04.196 [2024-09-30 20:02:48.384800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.196 [2024-09-30 20:02:48.559199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.454 [2024-09-30 20:02:48.788250] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.454 [2024-09-30 20:02:48.788502] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.713 [2024-09-30 20:02:48.939994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.713 [2024-09-30 20:02:48.940152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.713 [2024-09-30 20:02:48.940205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.713 [2024-09-30 20:02:48.940232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.713 [2024-09-30 20:02:48.940295] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.713 [2024-09-30 20:02:48.940307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.713 [2024-09-30 20:02:48.940315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:04.713 [2024-09-30 20:02:48.940322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.713 [2024-09-30 20:02:48.940339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.713 [2024-09-30 20:02:48.940847] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.713 [2024-09-30 20:02:48.940860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.713 [2024-09-30 20:02:48.940866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.713 [2024-09-30 20:02:48.940874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:19:04.713 [2024-09-30 20:02:48.940880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.713 [2024-09-30 20:02:48.942171] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.713 [2024-09-30 20:02:48.952315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.713 [2024-09-30 20:02:48.952341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.713 [2024-09-30 20:02:48.952351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.145 ms 00:19:04.714 [2024-09-30 20:02:48.952358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.952404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.952413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.714 [2024-09-30 20:02:48.952420] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:04.714 [2024-09-30 20:02:48.952426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.958673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.958701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.714 [2024-09-30 20:02:48.958710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.198 ms 00:19:04.714 [2024-09-30 20:02:48.958716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.958776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.958784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.714 [2024-09-30 20:02:48.958791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:04.714 [2024-09-30 20:02:48.958797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.958837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.958846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.714 [2024-09-30 20:02:48.958852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:04.714 [2024-09-30 20:02:48.958859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.958877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:04.714 [2024-09-30 20:02:48.961951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.962087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.714 [2024-09-30 20:02:48.962101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.079 ms 00:19:04.714 [2024-09-30 20:02:48.962108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.962135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.962143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.714 [2024-09-30 20:02:48.962150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:04.714 [2024-09-30 20:02:48.962156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.962176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.714 [2024-09-30 20:02:48.962195] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:04.714 [2024-09-30 20:02:48.962225] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.714 [2024-09-30 20:02:48.962238] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:04.714 [2024-09-30 20:02:48.962335] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.714 [2024-09-30 20:02:48.962345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.714 [2024-09-30 20:02:48.962354] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.714 [2024-09-30 20:02:48.962365] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962374] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962380] ftl_layout.c: 689:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:04.714 [2024-09-30 20:02:48.962386] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.714 [2024-09-30 20:02:48.962392] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.714 [2024-09-30 20:02:48.962399] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.714 [2024-09-30 20:02:48.962405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.962412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.714 [2024-09-30 20:02:48.962419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:19:04.714 [2024-09-30 20:02:48.962425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.962489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.714 [2024-09-30 20:02:48.962498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.714 [2024-09-30 20:02:48.962505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:04.714 [2024-09-30 20:02:48.962511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.714 [2024-09-30 20:02:48.962596] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.714 [2024-09-30 20:02:48.962605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.714 [2024-09-30 20:02:48.962612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.714 [2024-09-30 20:02:48.962631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.714 [2024-09-30 
20:02:48.962637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.714 [2024-09-30 20:02:48.962648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.714 [2024-09-30 20:02:48.962660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.714 [2024-09-30 20:02:48.962665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:04.714 [2024-09-30 20:02:48.962671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.714 [2024-09-30 20:02:48.962682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.714 [2024-09-30 20:02:48.962687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:04.714 [2024-09-30 20:02:48.962693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.714 [2024-09-30 20:02:48.962706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.714 [2024-09-30 20:02:48.962724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.714 [2024-09-30 20:02:48.962740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 
00:19:04.714 [2024-09-30 20:02:48.962745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.714 [2024-09-30 20:02:48.962755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.714 [2024-09-30 20:02:48.962771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.714 [2024-09-30 20:02:48.962787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.714 [2024-09-30 20:02:48.962797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.714 [2024-09-30 20:02:48.962802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:04.714 [2024-09-30 20:02:48.962807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.714 [2024-09-30 20:02:48.962812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.714 [2024-09-30 20:02:48.962818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:04.714 [2024-09-30 20:02:48.962823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.714 [2024-09-30 20:02:48.962833] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:19:04.714 [2024-09-30 20:02:48.962838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962843] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.714 [2024-09-30 20:02:48.962850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.714 [2024-09-30 20:02:48.962857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.714 [2024-09-30 20:02:48.962870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.714 [2024-09-30 20:02:48.962876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.714 [2024-09-30 20:02:48.962882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.714 [2024-09-30 20:02:48.962888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.714 [2024-09-30 20:02:48.962893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.714 [2024-09-30 20:02:48.962898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.714 [2024-09-30 20:02:48.962905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.714 [2024-09-30 20:02:48.962913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.714 [2024-09-30 20:02:48.962919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:04.714 [2024-09-30 20:02:48.962925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:04.714 [2024-09-30 20:02:48.962930] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:04.715 [2024-09-30 20:02:48.962936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:04.715 [2024-09-30 20:02:48.962942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:04.715 [2024-09-30 20:02:48.962947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:04.715 [2024-09-30 20:02:48.962952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:04.715 [2024-09-30 20:02:48.962959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:04.715 [2024-09-30 20:02:48.962964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:04.715 [2024-09-30 20:02:48.962970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:04.715 [2024-09-30 20:02:48.962975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:04.715 [2024-09-30 20:02:48.962981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:04.715 [2024-09-30 20:02:48.962986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:04.715 [2024-09-30 20:02:48.962991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:04.715 [2024-09-30 20:02:48.962997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.715 [2024-09-30 20:02:48.963003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.715 [2024-09-30 20:02:48.963010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.715 [2024-09-30 20:02:48.963016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.715 [2024-09-30 20:02:48.963022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.715 [2024-09-30 20:02:48.963028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.715 [2024-09-30 20:02:48.963033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:48.963039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.715 [2024-09-30 20:02:48.963045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:19:04.715 [2024-09-30 20:02:48.963050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:48.996515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:48.996565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.715 [2024-09-30 20:02:48.996582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.416 ms 00:19:04.715 [2024-09-30 20:02:48.996595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:04.715 [2024-09-30 20:02:48.996709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:48.996722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.715 [2024-09-30 20:02:48.996734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:04.715 [2024-09-30 20:02:48.996745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.023507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.023536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.715 [2024-09-30 20:02:49.023547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.692 ms 00:19:04.715 [2024-09-30 20:02:49.023555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.023581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.023588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.715 [2024-09-30 20:02:49.023596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:04.715 [2024-09-30 20:02:49.023602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.024000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.024014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.715 [2024-09-30 20:02:49.024022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:19:04.715 [2024-09-30 20:02:49.024031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.024141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.024149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:19:04.715 [2024-09-30 20:02:49.024156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:04.715 [2024-09-30 20:02:49.024162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.035194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.035375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.715 [2024-09-30 20:02:49.035389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.016 ms 00:19:04.715 [2024-09-30 20:02:49.035396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.045876] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:04.715 [2024-09-30 20:02:49.045903] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.715 [2024-09-30 20:02:49.045913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.045920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.715 [2024-09-30 20:02:49.045927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.411 ms 00:19:04.715 [2024-09-30 20:02:49.045933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.064475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.064501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.715 [2024-09-30 20:02:49.064510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:19:04.715 [2024-09-30 20:02:49.064517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.715 [2024-09-30 20:02:49.073306] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.715 [2024-09-30 20:02:49.073416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.715 [2024-09-30 20:02:49.073429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.757 ms 00:19:04.715 [2024-09-30 20:02:49.073436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.082132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.082158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.973 [2024-09-30 20:02:49.082166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.664 ms 00:19:04.973 [2024-09-30 20:02:49.082172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.082643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.082664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.973 [2024-09-30 20:02:49.082672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:19:04.973 [2024-09-30 20:02:49.082678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.131292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.131338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.973 [2024-09-30 20:02:49.131350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.599 ms 00:19:04.973 [2024-09-30 20:02:49.131357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.141014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:04.973 [2024-09-30 20:02:49.143723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:04.973 [2024-09-30 20:02:49.143751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.973 [2024-09-30 20:02:49.143761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.323 ms 00:19:04.973 [2024-09-30 20:02:49.143773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.143847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.143856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.973 [2024-09-30 20:02:49.143864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:04.973 [2024-09-30 20:02:49.143871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.143946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.143956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.973 [2024-09-30 20:02:49.143963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:04.973 [2024-09-30 20:02:49.143970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.143988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.143995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.973 [2024-09-30 20:02:49.144002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.973 [2024-09-30 20:02:49.144008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.144035] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.973 [2024-09-30 20:02:49.144043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.144050] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.973 [2024-09-30 20:02:49.144059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:04.973 [2024-09-30 20:02:49.144065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.162694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.162794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.973 [2024-09-30 20:02:49.162837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.614 ms 00:19:04.973 [2024-09-30 20:02:49.162855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.162922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.973 [2024-09-30 20:02:49.162941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.973 [2024-09-30 20:02:49.162958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:04.973 [2024-09-30 20:02:49.162974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.973 [2024-09-30 20:02:49.164114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.732 ms, result 0 00:19:28.499  Copying: 46/1024 [MB] (46 MBps) Copying: 91/1024 [MB] (44 MBps) Copying: 139/1024 [MB] (48 MBps) Copying: 184/1024 [MB] (45 MBps) Copying: 229/1024 [MB] (44 MBps) Copying: 272/1024 [MB] (42 MBps) Copying: 316/1024 [MB] (44 MBps) Copying: 359/1024 [MB] (43 MBps) Copying: 402/1024 [MB] (42 MBps) Copying: 446/1024 [MB] (43 MBps) Copying: 491/1024 [MB] (45 MBps) Copying: 535/1024 [MB] (43 MBps) Copying: 582/1024 [MB] (47 MBps) Copying: 627/1024 [MB] (44 MBps) Copying: 671/1024 [MB] (44 MBps) Copying: 715/1024 [MB] (43 MBps) Copying: 758/1024 [MB] (43 MBps) Copying: 803/1024 [MB] (44 MBps) Copying: 851/1024 [MB] 
(48 MBps) Copying: 896/1024 [MB] (44 MBps) Copying: 939/1024 [MB] (43 MBps) Copying: 982/1024 [MB] (42 MBps) Copying: 1023/1024 [MB] (40 MBps) Copying: 1024/1024 [MB] (average 43 MBps)[2024-09-30 20:03:12.814446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.499 [2024-09-30 20:03:12.814505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.499 [2024-09-30 20:03:12.814519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:28.499 [2024-09-30 20:03:12.814526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.499 [2024-09-30 20:03:12.816204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.499 [2024-09-30 20:03:12.819932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.499 [2024-09-30 20:03:12.820043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.499 [2024-09-30 20:03:12.820058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.573 ms 00:19:28.499 [2024-09-30 20:03:12.820071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.499 [2024-09-30 20:03:12.829547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.499 [2024-09-30 20:03:12.829584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.499 [2024-09-30 20:03:12.829593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:19:28.499 [2024-09-30 20:03:12.829599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.499 [2024-09-30 20:03:12.845476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.499 [2024-09-30 20:03:12.845503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.499 [2024-09-30 20:03:12.845511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.864 ms 00:19:28.499 [2024-09-30 20:03:12.845518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.499 [2024-09-30 20:03:12.850250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.499 [2024-09-30 20:03:12.850285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.499 [2024-09-30 20:03:12.850293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:19:28.500 [2024-09-30 20:03:12.850301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:12.869657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:12.869789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.759 [2024-09-30 20:03:12.869803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.324 ms 00:19:28.759 [2024-09-30 20:03:12.869811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:12.881208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:12.881337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.759 [2024-09-30 20:03:12.881351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.371 ms 00:19:28.759 [2024-09-30 20:03:12.881358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:12.930525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:12.930623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.759 [2024-09-30 20:03:12.930669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.138 ms 00:19:28.759 [2024-09-30 20:03:12.930688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:12.949021] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:12.949115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.759 [2024-09-30 20:03:12.949156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.309 ms 00:19:28.759 [2024-09-30 20:03:12.949174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:12.966641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:12.966736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.759 [2024-09-30 20:03:12.966778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.435 ms 00:19:28.759 [2024-09-30 20:03:12.966796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:12.984127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:12.984218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.759 [2024-09-30 20:03:12.984258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.284 ms 00:19:28.759 [2024-09-30 20:03:12.984291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:13.001567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.759 [2024-09-30 20:03:13.001659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.759 [2024-09-30 20:03:13.001700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.223 ms 00:19:28.759 [2024-09-30 20:03:13.001718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.759 [2024-09-30 20:03:13.001748] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.759 [2024-09-30 20:03:13.001769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121600 / 261120 
wr_cnt: 1 state: open 00:19:28.759 [2024-09-30 20:03:13.001795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.001819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.001841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.001974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.001998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.002021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.002051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.002074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.002096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.002118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.759 [2024-09-30 20:03:13.002142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 
261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.002966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 
/ 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 
0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.003985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.760 [2024-09-30 20:03:13.004906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.760 [2024-09-30 20:03:13.004922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe2e57ce-865f-47c0-bf7b-dac67aca0b50 00:19:28.760 [2024-09-30 20:03:13.004949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121600 00:19:28.761 [2024-09-30 20:03:13.004992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122560 00:19:28.761 [2024-09-30 20:03:13.005009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121600 00:19:28.761 [2024-09-30 20:03:13.005025] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:19:28.761 [2024-09-30 20:03:13.005039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.761 [2024-09-30 20:03:13.005054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.761 [2024-09-30 20:03:13.005070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.761 [2024-09-30 20:03:13.005084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.761 [2024-09-30 20:03:13.005121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.761 [2024-09-30 20:03:13.005139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.761 [2024-09-30 20:03:13.005161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.761 [2024-09-30 20:03:13.005177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.392 ms 00:19:28.761 [2024-09-30 20:03:13.005192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.015013] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:28.761 [2024-09-30 20:03:13.015104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.761 [2024-09-30 20:03:13.015146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.791 ms 00:19:28.761 [2024-09-30 20:03:13.015164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.015490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.761 [2024-09-30 20:03:13.015559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.761 [2024-09-30 20:03:13.015633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:28.761 [2024-09-30 20:03:13.015652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.039029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.761 [2024-09-30 20:03:13.039140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.761 [2024-09-30 20:03:13.039195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.761 [2024-09-30 20:03:13.039215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.039291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.761 [2024-09-30 20:03:13.039333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.761 [2024-09-30 20:03:13.039356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.761 [2024-09-30 20:03:13.039371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.039450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.761 [2024-09-30 20:03:13.039515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.761 [2024-09-30 
20:03:13.039552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.761 [2024-09-30 20:03:13.039570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.039593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.761 [2024-09-30 20:03:13.039609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.761 [2024-09-30 20:03:13.039625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.761 [2024-09-30 20:03:13.039742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.761 [2024-09-30 20:03:13.102938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.761 [2024-09-30 20:03:13.103107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.761 [2024-09-30 20:03:13.103151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.761 [2024-09-30 20:03:13.103170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.154469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.020 [2024-09-30 20:03:13.154639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.020 [2024-09-30 20:03:13.154681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.020 [2024-09-30 20:03:13.154705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.154795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.020 [2024-09-30 20:03:13.154807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.020 [2024-09-30 20:03:13.154814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.020 [2024-09-30 20:03:13.154820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.154849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.020 [2024-09-30 20:03:13.154856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.020 [2024-09-30 20:03:13.154863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.020 [2024-09-30 20:03:13.154869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.154949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.020 [2024-09-30 20:03:13.154959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.020 [2024-09-30 20:03:13.154965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.020 [2024-09-30 20:03:13.154972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.154999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.020 [2024-09-30 20:03:13.155007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.020 [2024-09-30 20:03:13.155013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.020 [2024-09-30 20:03:13.155020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.155054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.020 [2024-09-30 20:03:13.155063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.020 [2024-09-30 20:03:13.155069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.020 [2024-09-30 20:03:13.155076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.020 [2024-09-30 20:03:13.155115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.021 [2024-09-30 20:03:13.155123] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.021 [2024-09-30 20:03:13.155130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.021 [2024-09-30 20:03:13.155137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.021 [2024-09-30 20:03:13.155241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.044 ms, result 0 00:19:31.551 00:19:31.551 00:19:31.551 20:03:15 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:31.551 [2024-09-30 20:03:15.461149] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:31.551 [2024-09-30 20:03:15.461299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75621 ] 00:19:31.551 [2024-09-30 20:03:15.612425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.551 [2024-09-30 20:03:15.790378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.809 [2024-09-30 20:03:16.020072] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.809 [2024-09-30 20:03:16.020133] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.809 [2024-09-30 20:03:16.173590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.068 [2024-09-30 20:03:16.173810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.068 [2024-09-30 20:03:16.173831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:32.068 
[2024-09-30 20:03:16.173847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.068 [2024-09-30 20:03:16.173902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.068 [2024-09-30 20:03:16.173913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.068 [2024-09-30 20:03:16.173923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:32.068 [2024-09-30 20:03:16.173932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.068 [2024-09-30 20:03:16.173951] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.068 [2024-09-30 20:03:16.174682] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.068 [2024-09-30 20:03:16.174702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.068 [2024-09-30 20:03:16.174712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.068 [2024-09-30 20:03:16.174722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:19:32.068 [2024-09-30 20:03:16.174730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.068 [2024-09-30 20:03:16.176085] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.068 [2024-09-30 20:03:16.188646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.068 [2024-09-30 20:03:16.188791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.068 [2024-09-30 20:03:16.188809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.563 ms 00:19:32.068 [2024-09-30 20:03:16.188818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.068 [2024-09-30 20:03:16.188867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.068 [2024-09-30 
20:03:16.188878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.068 [2024-09-30 20:03:16.188886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:32.068 [2024-09-30 20:03:16.188893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.195320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.195446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.069 [2024-09-30 20:03:16.195461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.371 ms 00:19:32.069 [2024-09-30 20:03:16.195469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.195550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.195559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.069 [2024-09-30 20:03:16.195568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:32.069 [2024-09-30 20:03:16.195576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.195620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.195629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.069 [2024-09-30 20:03:16.195639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:32.069 [2024-09-30 20:03:16.195647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.195670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.069 [2024-09-30 20:03:16.199208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.199326] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.069 [2024-09-30 20:03:16.199342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:19:32.069 [2024-09-30 20:03:16.199350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.199380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.199389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.069 [2024-09-30 20:03:16.199397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:32.069 [2024-09-30 20:03:16.199405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.199436] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.069 [2024-09-30 20:03:16.199456] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.069 [2024-09-30 20:03:16.199493] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.069 [2024-09-30 20:03:16.199509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:32.069 [2024-09-30 20:03:16.199616] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.069 [2024-09-30 20:03:16.199628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.069 [2024-09-30 20:03:16.199639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:32.069 [2024-09-30 20:03:16.199653] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.069 [2024-09-30 20:03:16.199663] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.069 [2024-09-30 20:03:16.199671] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:32.069 [2024-09-30 20:03:16.199679] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:32.069 [2024-09-30 20:03:16.199686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.069 [2024-09-30 20:03:16.199694] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.069 [2024-09-30 20:03:16.199703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.199711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.069 [2024-09-30 20:03:16.199719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:32.069 [2024-09-30 20:03:16.199726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.199809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.069 [2024-09-30 20:03:16.199821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.069 [2024-09-30 20:03:16.199828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:32.069 [2024-09-30 20:03:16.199835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.069 [2024-09-30 20:03:16.199949] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.069 [2024-09-30 20:03:16.199961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.069 [2024-09-30 20:03:16.199969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.069 [2024-09-30 20:03:16.199977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.199986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:19:32.069 [2024-09-30 20:03:16.199994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.069 [2024-09-30 20:03:16.200016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.069 [2024-09-30 20:03:16.200030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.069 [2024-09-30 20:03:16.200038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:32.069 [2024-09-30 20:03:16.200045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.069 [2024-09-30 20:03:16.200057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.069 [2024-09-30 20:03:16.200064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:32.069 [2024-09-30 20:03:16.200071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:32.069 [2024-09-30 20:03:16.200089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.069 [2024-09-30 20:03:16.200109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200122] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l1 00:19:32.069 [2024-09-30 20:03:16.200129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.069 [2024-09-30 20:03:16.200150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.069 [2024-09-30 20:03:16.200170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.069 [2024-09-30 20:03:16.200190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.069 [2024-09-30 20:03:16.200204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.069 [2024-09-30 20:03:16.200210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:32.069 [2024-09-30 20:03:16.200216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.069 [2024-09-30 20:03:16.200223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.069 [2024-09-30 20:03:16.200230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:32.069 [2024-09-30 20:03:16.200237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200243] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.069 [2024-09-30 20:03:16.200250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:32.069 [2024-09-30 20:03:16.200257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200264] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.069 [2024-09-30 20:03:16.200284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.069 [2024-09-30 20:03:16.200294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.069 [2024-09-30 20:03:16.200309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:32.069 [2024-09-30 20:03:16.200318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.069 [2024-09-30 20:03:16.200325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.069 [2024-09-30 20:03:16.200333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.069 [2024-09-30 20:03:16.200340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.069 [2024-09-30 20:03:16.200347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:32.069 [2024-09-30 20:03:16.200355] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.069 [2024-09-30 20:03:16.200365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.069 [2024-09-30 20:03:16.200374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:32.069 [2024-09-30 20:03:16.200381] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:32.069 [2024-09-30 20:03:16.200388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:32.069 [2024-09-30 20:03:16.200396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:32.069 [2024-09-30 20:03:16.200403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:32.069 [2024-09-30 20:03:16.200410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:32.069 [2024-09-30 20:03:16.200418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:32.070 [2024-09-30 20:03:16.200425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:32.070 [2024-09-30 20:03:16.200432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:32.070 [2024-09-30 20:03:16.200439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:32.070 [2024-09-30 20:03:16.200447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:32.070 [2024-09-30 20:03:16.200454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:32.070 [2024-09-30 20:03:16.200462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:19:32.070 [2024-09-30 20:03:16.200470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:32.070 [2024-09-30 20:03:16.200477] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.070 [2024-09-30 20:03:16.200486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.070 [2024-09-30 20:03:16.200495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:32.070 [2024-09-30 20:03:16.200502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.070 [2024-09-30 20:03:16.200510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.070 [2024-09-30 20:03:16.200517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.070 [2024-09-30 20:03:16.200524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.200532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.070 [2024-09-30 20:03:16.200539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:19:32.070 [2024-09-30 20:03:16.200546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.236546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.236700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.070 [2024-09-30 20:03:16.236769] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.954 ms 00:19:32.070 [2024-09-30 20:03:16.236796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.236918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.236945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.070 [2024-09-30 20:03:16.236968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:32.070 [2024-09-30 20:03:16.236990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.269517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.269635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.070 [2024-09-30 20:03:16.269696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.397 ms 00:19:32.070 [2024-09-30 20:03:16.269720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.269765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.269788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.070 [2024-09-30 20:03:16.269807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:32.070 [2024-09-30 20:03:16.269826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.270309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.270394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.070 [2024-09-30 20:03:16.270442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:19:32.070 [2024-09-30 20:03:16.270470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.270622] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.270646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.070 [2024-09-30 20:03:16.270698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:32.070 [2024-09-30 20:03:16.270720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.284059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.284157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.070 [2024-09-30 20:03:16.284205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.308 ms 00:19:32.070 [2024-09-30 20:03:16.284228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.297228] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:32.070 [2024-09-30 20:03:16.297357] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.070 [2024-09-30 20:03:16.297419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.297442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.070 [2024-09-30 20:03:16.297462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.054 ms 00:19:32.070 [2024-09-30 20:03:16.297481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.321908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.322045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.070 [2024-09-30 20:03:16.322098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:19:32.070 
[2024-09-30 20:03:16.322121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.333298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.333401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.070 [2024-09-30 20:03:16.333450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.132 ms 00:19:32.070 [2024-09-30 20:03:16.333473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.344870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.344967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.070 [2024-09-30 20:03:16.345014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.357 ms 00:19:32.070 [2024-09-30 20:03:16.345036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.345646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.345727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.070 [2024-09-30 20:03:16.345830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:19:32.070 [2024-09-30 20:03:16.345933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.404297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.404476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.070 [2024-09-30 20:03:16.404527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.324 ms 00:19:32.070 [2024-09-30 20:03:16.404551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.414996] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:32.070 [2024-09-30 20:03:16.417840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.417933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.070 [2024-09-30 20:03:16.417985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.242 ms 00:19:32.070 [2024-09-30 20:03:16.418013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.418407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.418513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.070 [2024-09-30 20:03:16.418568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:32.070 [2024-09-30 20:03:16.418594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.420235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.420375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.070 [2024-09-30 20:03:16.420430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:19:32.070 [2024-09-30 20:03:16.420453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.420547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.420596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.070 [2024-09-30 20:03:16.420621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:32.070 [2024-09-30 20:03:16.420641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.070 [2024-09-30 20:03:16.421002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test 
skipped 00:19:32.070 [2024-09-30 20:03:16.421258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.070 [2024-09-30 20:03:16.421454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.070 [2024-09-30 20:03:16.421614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:19:32.070 [2024-09-30 20:03:16.421744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.329 [2024-09-30 20:03:16.452400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.329 [2024-09-30 20:03:16.452525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.329 [2024-09-30 20:03:16.452622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.552 ms 00:19:32.329 [2024-09-30 20:03:16.452646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.329 [2024-09-30 20:03:16.452734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.329 [2024-09-30 20:03:16.452775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.329 [2024-09-30 20:03:16.452819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:32.329 [2024-09-30 20:03:16.452841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.329 [2024-09-30 20:03:16.453894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 279.856 ms, result 0 00:19:54.185  Copying: 40/1024 [MB] (40 MBps) Copying: 86/1024 [MB] (45 MBps) Copying: 140/1024 [MB] (54 MBps) Copying: 186/1024 [MB] (46 MBps) Copying: 236/1024 [MB] (49 MBps) Copying: 285/1024 [MB] (48 MBps) Copying: 332/1024 [MB] (47 MBps) Copying: 381/1024 [MB] (48 MBps) Copying: 430/1024 [MB] (49 MBps) Copying: 477/1024 [MB] (46 MBps) Copying: 522/1024 [MB] (45 MBps) Copying: 571/1024 [MB] (49 MBps) Copying: 624/1024 [MB] (53 MBps) Copying: 
674/1024 [MB] (49 MBps) Copying: 721/1024 [MB] (47 MBps) Copying: 767/1024 [MB] (45 MBps) Copying: 816/1024 [MB] (49 MBps) Copying: 864/1024 [MB] (47 MBps) Copying: 913/1024 [MB] (49 MBps) Copying: 958/1024 [MB] (44 MBps) Copying: 1004/1024 [MB] (46 MBps) Copying: 1024/1024 [MB] (average 47 MBps)[2024-09-30 20:03:38.321586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.321653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:54.185 [2024-09-30 20:03:38.321668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:54.185 [2024-09-30 20:03:38.321677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.321699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:54.185 [2024-09-30 20:03:38.324522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.324705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:54.185 [2024-09-30 20:03:38.324723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:19:54.185 [2024-09-30 20:03:38.324739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.324975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.324986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:54.185 [2024-09-30 20:03:38.324995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:19:54.185 [2024-09-30 20:03:38.325003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.329308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.329338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:54.185 [2024-09-30 20:03:38.329348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms 00:19:54.185 [2024-09-30 20:03:38.329356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.335568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.335696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:54.185 [2024-09-30 20:03:38.335712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms 00:19:54.185 [2024-09-30 20:03:38.335721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.361655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.361703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:54.185 [2024-09-30 20:03:38.361714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.870 ms 00:19:54.185 [2024-09-30 20:03:38.361722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.375864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.375993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:54.185 [2024-09-30 20:03:38.376010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.110 ms 00:19:54.185 [2024-09-30 20:03:38.376018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.433096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.433133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:54.185 [2024-09-30 20:03:38.433149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.044 ms 00:19:54.185 [2024-09-30 20:03:38.433157] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.456480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.456513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:54.185 [2024-09-30 20:03:38.456524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.308 ms 00:19:54.185 [2024-09-30 20:03:38.456532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.478839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.479005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:54.185 [2024-09-30 20:03:38.479020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.275 ms 00:19:54.185 [2024-09-30 20:03:38.479027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.501070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.501101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.185 [2024-09-30 20:03:38.501112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.015 ms 00:19:54.185 [2024-09-30 20:03:38.501119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.522859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.185 [2024-09-30 20:03:38.522888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.185 [2024-09-30 20:03:38.522899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.686 ms 00:19:54.185 [2024-09-30 20:03:38.522907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.185 [2024-09-30 20:03:38.522936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.185 
[2024-09-30 20:03:38.522952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:19:54.185 [2024-09-30 20:03:38.522963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.522972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.522980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.522988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.522996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 
[2024-09-30 20:03:38.523065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.185 [2024-09-30 20:03:38.523096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:19:54.186 [2024-09-30 20:03:38.523172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: 
free 00:19:54.186 [2024-09-30 20:03:38.523295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 
state: free 00:19:54.186 [2024-09-30 20:03:38.523409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 
0 state: free 00:19:54.186 [2024-09-30 20:03:38.523517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 
wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 
261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.186 [2024-09-30 20:03:38.523777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.186 [2024-09-30 20:03:38.523786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fe2e57ce-865f-47c0-bf7b-dac67aca0b50 00:19:54.186 [2024-09-30 20:03:38.523799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:19:54.186 [2024-09-30 20:03:38.523810] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 10432 00:19:54.186 [2024-09-30 20:03:38.523818] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 9472 00:19:54.186 [2024-09-30 20:03:38.523826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1014 00:19:54.186 [2024-09-30 20:03:38.523833] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.186 [2024-09-30 20:03:38.523841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.186 [2024-09-30 20:03:38.523849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.186 [2024-09-30 20:03:38.523856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.186 [2024-09-30 20:03:38.523863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.186 [2024-09-30 20:03:38.523870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.187 [2024-09-30 20:03:38.523882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.187 [2024-09-30 20:03:38.523897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:19:54.187 [2024-09-30 20:03:38.523909] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.187 [2024-09-30 20:03:38.536565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.187 [2024-09-30 20:03:38.536681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.187 [2024-09-30 20:03:38.536701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.619 ms 00:19:54.187 [2024-09-30 20:03:38.536710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.187 [2024-09-30 20:03:38.537064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.187 [2024-09-30 20:03:38.537074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.187 [2024-09-30 20:03:38.537085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:19:54.187 [2024-09-30 20:03:38.537092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.566658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.566706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.446 [2024-09-30 20:03:38.566717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.566725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.566780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.566789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.446 [2024-09-30 20:03:38.566797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.566809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.566867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 
20:03:38.566877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.446 [2024-09-30 20:03:38.566885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.566894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.566910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.566919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.446 [2024-09-30 20:03:38.566927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.566935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.646789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.647016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.446 [2024-09-30 20:03:38.647034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.647042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.711920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.711977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.446 [2024-09-30 20:03:38.711991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.712103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.446 [2024-09-30 20:03:38.712111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.712166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.446 [2024-09-30 20:03:38.712175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.712468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.446 [2024-09-30 20:03:38.712477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.712527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.446 [2024-09-30 20:03:38.712537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.712594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.446 [2024-09-30 20:03:38.712603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712653] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.446 [2024-09-30 20:03:38.712663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.446 [2024-09-30 20:03:38.712672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.446 [2024-09-30 20:03:38.712680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.446 [2024-09-30 20:03:38.712800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.186 ms, result 0 00:19:55.381 00:19:55.381 00:19:55.381 20:03:39 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:57.936 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.936 Process with pid 74590 is not found 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74590 00:19:57.936 20:03:41 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74590 ']' 00:19:57.936 20:03:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74590 00:19:57.936 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74590) - No such process 00:19:57.936 20:03:41 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74590 is not found' 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:19:57.936 Remove shared memory files 
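The `killprocess 74590` trace above probes the pid with `kill -0` before attempting to terminate it, which is why the log reports "Process with pid 74590 is not found" instead of failing. A minimal sketch of that pattern (an illustrative stand-in, not the exact `autotest_common.sh` implementation):

```shell
# kill -0 sends no signal; it only checks whether the pid exists and is
# signalable, so it works as a cheap liveness probe before a real kill.
check_and_kill() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
}

# A pid far above the kernel's pid_max, so the probe is guaranteed to fail.
check_and_kill 99999999
```

This keeps cleanup idempotent: re-running the trap handler after the target already exited produces a notice rather than an error.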
00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:57.936 20:03:41 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:19:57.936 ************************************ 00:19:57.936 END TEST ftl_restore 00:19:57.937 ************************************ 00:19:57.937 00:19:57.937 real 2m4.895s 00:19:57.937 user 1m54.926s 00:19:57.937 sys 0m11.747s 00:19:57.937 20:03:41 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.937 20:03:41 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:57.937 20:03:41 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:57.937 20:03:41 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:57.937 20:03:41 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:57.937 20:03:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:57.937 ************************************ 00:19:57.937 START TEST ftl_dirty_shutdown 00:19:57.937 ************************************ 00:19:57.937 20:03:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:57.937 * Looking for test storage... 
00:19:57.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.937 20:03:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:57.937 20:03:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:19:57.937 20:03:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:57.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.937 --rc genhtml_branch_coverage=1 00:19:57.937 --rc genhtml_function_coverage=1 00:19:57.937 --rc genhtml_legend=1 00:19:57.937 --rc geninfo_all_blocks=1 00:19:57.937 --rc geninfo_unexecuted_blocks=1 00:19:57.937 00:19:57.937 ' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:57.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.937 --rc genhtml_branch_coverage=1 00:19:57.937 --rc genhtml_function_coverage=1 
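The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) splits both version strings into arrays and compares them field by field, which is why `1.15` correctly sorts below `2` even though a plain string compare would not. A rough sketch of that idea (the function name and details here are illustrative, not the exact `scripts/common.sh` code):

```shell
# Split dotted versions on '.' and compare numerically, field by field.
# Missing trailing fields are treated as 0, so 1.15 vs 2 compares 1<2.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Lexical comparison would rank "1.15" above "2" only by accident of the first character; the numeric per-field loop is what makes the check robust for tool-version gates like the lcov check in this test.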
00:19:57.937 --rc genhtml_legend=1 00:19:57.937 --rc geninfo_all_blocks=1 00:19:57.937 --rc geninfo_unexecuted_blocks=1 00:19:57.937 00:19:57.937 ' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:57.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.937 --rc genhtml_branch_coverage=1 00:19:57.937 --rc genhtml_function_coverage=1 00:19:57.937 --rc genhtml_legend=1 00:19:57.937 --rc geninfo_all_blocks=1 00:19:57.937 --rc geninfo_unexecuted_blocks=1 00:19:57.937 00:19:57.937 ' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:57.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.937 --rc genhtml_branch_coverage=1 00:19:57.937 --rc genhtml_function_coverage=1 00:19:57.937 --rc genhtml_legend=1 00:19:57.937 --rc geninfo_all_blocks=1 00:19:57.937 --rc geninfo_unexecuted_blocks=1 00:19:57.937 00:19:57.937 ' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:19:57.937 20:03:42 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=75966 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75966 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 75966 ']' 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.937 20:03:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:57.937 [2024-09-30 20:03:42.139852] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:57.937 [2024-09-30 20:03:42.140118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75966 ] 00:19:57.937 [2024-09-30 20:03:42.291170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.196 [2024-09-30 20:03:42.501969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1381 -- # local nb 00:19:59.132 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:59.391 { 00:19:59.391 "name": "nvme0n1", 00:19:59.391 "aliases": [ 00:19:59.391 "f01ae2ce-d02a-4ad9-b37b-16e429ae7ae5" 00:19:59.391 ], 00:19:59.391 "product_name": "NVMe disk", 00:19:59.391 "block_size": 4096, 00:19:59.391 "num_blocks": 1310720, 00:19:59.391 "uuid": "f01ae2ce-d02a-4ad9-b37b-16e429ae7ae5", 00:19:59.391 "numa_id": -1, 00:19:59.391 "assigned_rate_limits": { 00:19:59.391 "rw_ios_per_sec": 0, 00:19:59.391 "rw_mbytes_per_sec": 0, 00:19:59.391 "r_mbytes_per_sec": 0, 00:19:59.391 "w_mbytes_per_sec": 0 00:19:59.391 }, 00:19:59.391 "claimed": true, 00:19:59.391 "claim_type": "read_many_write_one", 00:19:59.391 "zoned": false, 00:19:59.391 "supported_io_types": { 00:19:59.391 "read": true, 00:19:59.391 "write": true, 00:19:59.391 "unmap": true, 00:19:59.391 "flush": true, 00:19:59.391 "reset": true, 00:19:59.391 "nvme_admin": true, 00:19:59.391 "nvme_io": true, 00:19:59.391 "nvme_io_md": false, 00:19:59.391 "write_zeroes": true, 00:19:59.391 "zcopy": false, 00:19:59.391 "get_zone_info": false, 00:19:59.391 "zone_management": false, 00:19:59.391 "zone_append": false, 00:19:59.391 "compare": true, 00:19:59.391 "compare_and_write": false, 00:19:59.391 "abort": true, 00:19:59.391 "seek_hole": false, 00:19:59.391 "seek_data": false, 00:19:59.391 "copy": true, 00:19:59.391 "nvme_iov_md": false 00:19:59.391 }, 00:19:59.391 "driver_specific": { 00:19:59.391 "nvme": [ 00:19:59.391 { 00:19:59.391 "pci_address": "0000:00:11.0", 00:19:59.391 "trid": { 00:19:59.391 "trtype": "PCIe", 00:19:59.391 "traddr": "0000:00:11.0" 00:19:59.391 }, 00:19:59.391 "ctrlr_data": { 00:19:59.391 "cntlid": 0, 00:19:59.391 "vendor_id": "0x1b36", 00:19:59.391 "model_number": "QEMU NVMe Ctrl", 
00:19:59.391 "serial_number": "12341", 00:19:59.391 "firmware_revision": "8.0.0", 00:19:59.391 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:59.391 "oacs": { 00:19:59.391 "security": 0, 00:19:59.391 "format": 1, 00:19:59.391 "firmware": 0, 00:19:59.391 "ns_manage": 1 00:19:59.391 }, 00:19:59.391 "multi_ctrlr": false, 00:19:59.391 "ana_reporting": false 00:19:59.391 }, 00:19:59.391 "vs": { 00:19:59.391 "nvme_version": "1.4" 00:19:59.391 }, 00:19:59.391 "ns_data": { 00:19:59.391 "id": 1, 00:19:59.391 "can_share": false 00:19:59.391 } 00:19:59.391 } 00:19:59.391 ], 00:19:59.391 "mp_policy": "active_passive" 00:19:59.391 } 00:19:59.391 } 00:19:59.391 ]' 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:59.391 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:59.650 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=d839b0f1-d379-45d7-9ef8-0b19af87f959 00:19:59.650 20:03:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:19:59.650 20:03:43 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d839b0f1-d379-45d7-9ef8-0b19af87f959 00:19:59.909 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:00.168 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=1285ac50-f123-4ee9-b322-3024d948b416 00:20:00.168 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1285ac50-f123-4ee9-b322-3024d948b416 00:20:00.426 20:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.426 20:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:00.426 20:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.426 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:00.426 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:00.426 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:00.427 { 00:20:00.427 "name": "2e7936d2-19a2-4d64-b8b8-930cc10e775e", 00:20:00.427 "aliases": [ 00:20:00.427 "lvs/nvme0n1p0" 00:20:00.427 ], 00:20:00.427 "product_name": "Logical Volume", 00:20:00.427 "block_size": 4096, 00:20:00.427 "num_blocks": 26476544, 00:20:00.427 "uuid": "2e7936d2-19a2-4d64-b8b8-930cc10e775e", 00:20:00.427 "assigned_rate_limits": { 00:20:00.427 "rw_ios_per_sec": 0, 00:20:00.427 "rw_mbytes_per_sec": 0, 00:20:00.427 "r_mbytes_per_sec": 0, 00:20:00.427 "w_mbytes_per_sec": 0 00:20:00.427 }, 00:20:00.427 "claimed": false, 00:20:00.427 "zoned": false, 00:20:00.427 "supported_io_types": { 00:20:00.427 "read": true, 00:20:00.427 "write": true, 00:20:00.427 "unmap": true, 00:20:00.427 "flush": false, 00:20:00.427 "reset": true, 00:20:00.427 "nvme_admin": false, 00:20:00.427 "nvme_io": false, 00:20:00.427 "nvme_io_md": false, 00:20:00.427 "write_zeroes": true, 00:20:00.427 "zcopy": false, 00:20:00.427 "get_zone_info": false, 00:20:00.427 "zone_management": false, 00:20:00.427 "zone_append": false, 00:20:00.427 "compare": false, 00:20:00.427 "compare_and_write": false, 00:20:00.427 "abort": false, 00:20:00.427 "seek_hole": true, 00:20:00.427 "seek_data": true, 00:20:00.427 "copy": false, 00:20:00.427 "nvme_iov_md": false 00:20:00.427 }, 00:20:00.427 "driver_specific": { 00:20:00.427 "lvol": { 00:20:00.427 "lvol_store_uuid": "1285ac50-f123-4ee9-b322-3024d948b416", 00:20:00.427 "base_bdev": "nvme0n1", 00:20:00.427 "thin_provision": true, 00:20:00.427 "num_allocated_clusters": 0, 00:20:00.427 "snapshot": false, 00:20:00.427 "clone": false, 00:20:00.427 "esnap_clone": false 00:20:00.427 } 00:20:00.427 } 00:20:00.427 } 00:20:00.427 ]' 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:00.427 20:03:44 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:00.427 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:00.686 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:00.686 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:00.686 20:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:00.686 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:00.686 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:00.686 20:03:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:00.686 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:00.945 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:00.945 { 00:20:00.945 "name": "2e7936d2-19a2-4d64-b8b8-930cc10e775e", 00:20:00.945 "aliases": [ 00:20:00.945 "lvs/nvme0n1p0" 00:20:00.945 ], 00:20:00.945 "product_name": "Logical 
Volume", 00:20:00.945 "block_size": 4096, 00:20:00.945 "num_blocks": 26476544, 00:20:00.945 "uuid": "2e7936d2-19a2-4d64-b8b8-930cc10e775e", 00:20:00.945 "assigned_rate_limits": { 00:20:00.945 "rw_ios_per_sec": 0, 00:20:00.945 "rw_mbytes_per_sec": 0, 00:20:00.945 "r_mbytes_per_sec": 0, 00:20:00.945 "w_mbytes_per_sec": 0 00:20:00.945 }, 00:20:00.945 "claimed": false, 00:20:00.945 "zoned": false, 00:20:00.945 "supported_io_types": { 00:20:00.945 "read": true, 00:20:00.945 "write": true, 00:20:00.945 "unmap": true, 00:20:00.945 "flush": false, 00:20:00.945 "reset": true, 00:20:00.945 "nvme_admin": false, 00:20:00.945 "nvme_io": false, 00:20:00.945 "nvme_io_md": false, 00:20:00.945 "write_zeroes": true, 00:20:00.945 "zcopy": false, 00:20:00.945 "get_zone_info": false, 00:20:00.945 "zone_management": false, 00:20:00.945 "zone_append": false, 00:20:00.945 "compare": false, 00:20:00.945 "compare_and_write": false, 00:20:00.945 "abort": false, 00:20:00.945 "seek_hole": true, 00:20:00.945 "seek_data": true, 00:20:00.945 "copy": false, 00:20:00.945 "nvme_iov_md": false 00:20:00.945 }, 00:20:00.945 "driver_specific": { 00:20:00.945 "lvol": { 00:20:00.945 "lvol_store_uuid": "1285ac50-f123-4ee9-b322-3024d948b416", 00:20:00.945 "base_bdev": "nvme0n1", 00:20:00.945 "thin_provision": true, 00:20:00.945 "num_allocated_clusters": 0, 00:20:00.945 "snapshot": false, 00:20:00.945 "clone": false, 00:20:00.945 "esnap_clone": false 00:20:00.945 } 00:20:00.945 } 00:20:00.945 } 00:20:00.945 ]' 00:20:00.945 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:00.945 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:00.945 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 
00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:01.204 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e7936d2-19a2-4d64-b8b8-930cc10e775e 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:01.463 { 00:20:01.463 "name": "2e7936d2-19a2-4d64-b8b8-930cc10e775e", 00:20:01.463 "aliases": [ 00:20:01.463 "lvs/nvme0n1p0" 00:20:01.463 ], 00:20:01.463 "product_name": "Logical Volume", 00:20:01.463 "block_size": 4096, 00:20:01.463 "num_blocks": 26476544, 00:20:01.463 "uuid": "2e7936d2-19a2-4d64-b8b8-930cc10e775e", 00:20:01.463 "assigned_rate_limits": { 00:20:01.463 "rw_ios_per_sec": 0, 00:20:01.463 "rw_mbytes_per_sec": 0, 00:20:01.463 "r_mbytes_per_sec": 0, 00:20:01.463 "w_mbytes_per_sec": 0 00:20:01.463 }, 00:20:01.463 "claimed": false, 00:20:01.463 "zoned": false, 00:20:01.463 "supported_io_types": { 00:20:01.463 "read": true, 00:20:01.463 "write": true, 00:20:01.463 "unmap": true, 00:20:01.463 "flush": false, 
00:20:01.463 "reset": true, 00:20:01.463 "nvme_admin": false, 00:20:01.463 "nvme_io": false, 00:20:01.463 "nvme_io_md": false, 00:20:01.463 "write_zeroes": true, 00:20:01.463 "zcopy": false, 00:20:01.463 "get_zone_info": false, 00:20:01.463 "zone_management": false, 00:20:01.463 "zone_append": false, 00:20:01.463 "compare": false, 00:20:01.463 "compare_and_write": false, 00:20:01.463 "abort": false, 00:20:01.463 "seek_hole": true, 00:20:01.463 "seek_data": true, 00:20:01.463 "copy": false, 00:20:01.463 "nvme_iov_md": false 00:20:01.463 }, 00:20:01.463 "driver_specific": { 00:20:01.463 "lvol": { 00:20:01.463 "lvol_store_uuid": "1285ac50-f123-4ee9-b322-3024d948b416", 00:20:01.463 "base_bdev": "nvme0n1", 00:20:01.463 "thin_provision": true, 00:20:01.463 "num_allocated_clusters": 0, 00:20:01.463 "snapshot": false, 00:20:01.463 "clone": false, 00:20:01.463 "esnap_clone": false 00:20:01.463 } 00:20:01.463 } 00:20:01.463 } 00:20:01.463 ]' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2e7936d2-19a2-4d64-b8b8-930cc10e775e --l2p_dram_limit 10' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 
0000:00:10.0 ']' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:01.463 20:03:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2e7936d2-19a2-4d64-b8b8-930cc10e775e --l2p_dram_limit 10 -c nvc0n1p0 00:20:01.723 [2024-09-30 20:03:45.966511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.966716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.723 [2024-09-30 20:03:45.966737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.723 [2024-09-30 20:03:45.966745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.966811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.966820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.723 [2024-09-30 20:03:45.966829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:01.723 [2024-09-30 20:03:45.966854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.966878] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.723 [2024-09-30 20:03:45.967525] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.723 [2024-09-30 20:03:45.967545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.967553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.723 [2024-09-30 20:03:45.967562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:20:01.723 [2024-09-30 20:03:45.967570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:01.723 [2024-09-30 20:03:45.967630] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5b0c4c84-b803-4f53-bd37-677f95439c40 00:20:01.723 [2024-09-30 20:03:45.968928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.968963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:01.723 [2024-09-30 20:03:45.968973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:01.723 [2024-09-30 20:03:45.968982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.975804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.975837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.723 [2024-09-30 20:03:45.975845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:20:01.723 [2024-09-30 20:03:45.975853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.975929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.975938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.723 [2024-09-30 20:03:45.975945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:01.723 [2024-09-30 20:03:45.975958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.975999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.976009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.723 [2024-09-30 20:03:45.976016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:01.723 [2024-09-30 20:03:45.976023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 
[2024-09-30 20:03:45.976042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.723 [2024-09-30 20:03:45.979308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.979332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.723 [2024-09-30 20:03:45.979343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:20:01.723 [2024-09-30 20:03:45.979349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.979379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.979385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.723 [2024-09-30 20:03:45.979393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:01.723 [2024-09-30 20:03:45.979402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.979422] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:01.723 [2024-09-30 20:03:45.979533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:01.723 [2024-09-30 20:03:45.979546] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.723 [2024-09-30 20:03:45.979554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:01.723 [2024-09-30 20:03:45.979566] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.723 [2024-09-30 20:03:45.979573] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.723 [2024-09-30 20:03:45.979581] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] L2P entries: 20971520 00:20:01.723 [2024-09-30 20:03:45.979588] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.723 [2024-09-30 20:03:45.979596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:01.723 [2024-09-30 20:03:45.979601] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:01.723 [2024-09-30 20:03:45.979608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.723 [2024-09-30 20:03:45.979619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.723 [2024-09-30 20:03:45.979627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:20:01.723 [2024-09-30 20:03:45.979632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.723 [2024-09-30 20:03:45.979699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.724 [2024-09-30 20:03:45.979707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.724 [2024-09-30 20:03:45.979715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:01.724 [2024-09-30 20:03:45.979721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.724 [2024-09-30 20:03:45.979797] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.724 [2024-09-30 20:03:45.979804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.724 [2024-09-30 20:03:45.979812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.724 [2024-09-30 20:03:45.979830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.724 [2024-09-30 
20:03:45.979837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.724 [2024-09-30 20:03:45.979849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.724 [2024-09-30 20:03:45.979860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.724 [2024-09-30 20:03:45.979866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:01.724 [2024-09-30 20:03:45.979872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.724 [2024-09-30 20:03:45.979878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.724 [2024-09-30 20:03:45.979884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:01.724 [2024-09-30 20:03:45.979890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.724 [2024-09-30 20:03:45.979903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.724 [2024-09-30 20:03:45.979921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.724 [2024-09-30 20:03:45.979938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 
00:20:01.724 [2024-09-30 20:03:45.979944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.724 [2024-09-30 20:03:45.979956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.724 [2024-09-30 20:03:45.979975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:01.724 [2024-09-30 20:03:45.979982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.724 [2024-09-30 20:03:45.979987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.724 [2024-09-30 20:03:45.979995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:01.724 [2024-09-30 20:03:45.980000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.724 [2024-09-30 20:03:45.980007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.724 [2024-09-30 20:03:45.980012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:01.724 [2024-09-30 20:03:45.980018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.724 [2024-09-30 20:03:45.980023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:01.724 [2024-09-30 20:03:45.980029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:01.724 [2024-09-30 20:03:45.980035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.724 [2024-09-30 20:03:45.980041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:01.724 [2024-09-30 20:03:45.980046] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:20:01.724 [2024-09-30 20:03:45.980052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.724 [2024-09-30 20:03:45.980057] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.724 [2024-09-30 20:03:45.980064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.724 [2024-09-30 20:03:45.980072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.724 [2024-09-30 20:03:45.980080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.724 [2024-09-30 20:03:45.980087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.724 [2024-09-30 20:03:45.980095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.724 [2024-09-30 20:03:45.980100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.724 [2024-09-30 20:03:45.980107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.724 [2024-09-30 20:03:45.980112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.724 [2024-09-30 20:03:45.980118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.724 [2024-09-30 20:03:45.980126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.724 [2024-09-30 20:03:45.980135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:01.724 [2024-09-30 20:03:45.980149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:01.724 [2024-09-30 20:03:45.980154] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:01.724 [2024-09-30 20:03:45.980160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:01.724 [2024-09-30 20:03:45.980167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:01.724 [2024-09-30 20:03:45.980174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:01.724 [2024-09-30 20:03:45.980180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:01.724 [2024-09-30 20:03:45.980187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:01.724 [2024-09-30 20:03:45.980192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:01.724 [2024-09-30 20:03:45.980201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:01.724 [2024-09-30 20:03:45.980230] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.724 [2024-09-30 20:03:45.980239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.724 [2024-09-30 20:03:45.980253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.724 [2024-09-30 20:03:45.980258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.724 [2024-09-30 20:03:45.980279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.724 [2024-09-30 20:03:45.980286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.724 [2024-09-30 20:03:45.980294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.724 [2024-09-30 20:03:45.980300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:20:01.724 [2024-09-30 20:03:45.980307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.724 [2024-09-30 20:03:45.980351] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:20:01.724 [2024-09-30 20:03:45.980362] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:04.255 [2024-09-30 20:03:48.175116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.175417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:04.255 [2024-09-30 20:03:48.175441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2194.755 ms 00:20:04.255 [2024-09-30 20:03:48.175452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.204294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.204343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.255 [2024-09-30 20:03:48.204357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.616 ms 00:20:04.255 [2024-09-30 20:03:48.204367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.204528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.204543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:04.255 [2024-09-30 20:03:48.204553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:04.255 [2024-09-30 20:03:48.204569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.249293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.249545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.255 [2024-09-30 20:03:48.249571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.665 ms 00:20:04.255 [2024-09-30 20:03:48.249586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.249642] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.249654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.255 [2024-09-30 20:03:48.249665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.255 [2024-09-30 20:03:48.249683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.250176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.250199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.255 [2024-09-30 20:03:48.250210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:20:04.255 [2024-09-30 20:03:48.250225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.250380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.250394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.255 [2024-09-30 20:03:48.250404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:04.255 [2024-09-30 20:03:48.250417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.266647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.266686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.255 [2024-09-30 20:03:48.266698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.207 ms 00:20:04.255 [2024-09-30 20:03:48.266710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.278933] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:04.255 [2024-09-30 20:03:48.282171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 
[2024-09-30 20:03:48.282201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:04.255 [2024-09-30 20:03:48.282216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.371 ms 00:20:04.255 [2024-09-30 20:03:48.282225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.346648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.346695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:04.255 [2024-09-30 20:03:48.346716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.391 ms 00:20:04.255 [2024-09-30 20:03:48.346725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.346922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.346933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:04.255 [2024-09-30 20:03:48.346948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:20:04.255 [2024-09-30 20:03:48.346956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.370200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.370238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:04.255 [2024-09-30 20:03:48.370252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.199 ms 00:20:04.255 [2024-09-30 20:03:48.370260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.392181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.392212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:04.255 [2024-09-30 20:03:48.392226] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.867 ms 00:20:04.255 [2024-09-30 20:03:48.392234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.392832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.392854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:04.255 [2024-09-30 20:03:48.392865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:20:04.255 [2024-09-30 20:03:48.392872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.464884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.464924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:04.255 [2024-09-30 20:03:48.464942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.976 ms 00:20:04.255 [2024-09-30 20:03:48.464953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.489916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.489951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:04.255 [2024-09-30 20:03:48.489966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.887 ms 00:20:04.255 [2024-09-30 20:03:48.489974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.513244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.513289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:04.255 [2024-09-30 20:03:48.513302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.228 ms 00:20:04.255 [2024-09-30 20:03:48.513310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 
20:03:48.536739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.536777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:04.255 [2024-09-30 20:03:48.536791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.389 ms 00:20:04.255 [2024-09-30 20:03:48.536799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.536842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.536852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:04.255 [2024-09-30 20:03:48.536869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:04.255 [2024-09-30 20:03:48.536877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.536961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.255 [2024-09-30 20:03:48.536972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:04.255 [2024-09-30 20:03:48.536983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:04.255 [2024-09-30 20:03:48.536990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.255 [2024-09-30 20:03:48.538069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2571.086 ms, result 0 00:20:04.255 { 00:20:04.255 "name": "ftl0", 00:20:04.255 "uuid": "5b0c4c84-b803-4f53-bd37-677f95439c40" 00:20:04.255 } 00:20:04.255 20:03:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:04.255 20:03:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:04.514 20:03:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:04.514 20:03:48 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:04.514 20:03:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:04.780 /dev/nbd0 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:04.780 20:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:04.780 1+0 records in 00:20:04.780 1+0 records out 00:20:04.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294414 s, 13.9 MB/s 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 
00:20:04.780 20:03:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:04.780 [2024-09-30 20:03:49.077364] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:04.780 [2024-09-30 20:03:49.077499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76097 ] 00:20:05.039 [2024-09-30 20:03:49.219578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.297 [2024-09-30 20:03:49.430187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.415  Copying: 222/1024 [MB] (222 MBps) Copying: 481/1024 [MB] (258 MBps) Copying: 737/1024 [MB] (256 MBps) Copying: 992/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 248 MBps) 00:20:10.415 00:20:10.415 20:03:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:12.317 20:03:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:12.317 [2024-09-30 20:03:56.497712] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:20:12.317 [2024-09-30 20:03:56.498208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76173 ] 00:20:12.317 [2024-09-30 20:03:56.640994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.576 [2024-09-30 20:03:56.852600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.567  Copying: 32/1024 [MB] (32 MBps) Copying: 59/1024 [MB] (27 MBps) Copying: 88/1024 [MB] (29 MBps) Copying: 117/1024 [MB] (28 MBps) Copying: 146/1024 [MB] (28 MBps) Copying: 175/1024 [MB] (28 MBps) Copying: 204/1024 [MB] (29 MBps) Copying: 233/1024 [MB] (29 MBps) Copying: 263/1024 [MB] (29 MBps) Copying: 293/1024 [MB] (30 MBps) Copying: 322/1024 [MB] (29 MBps) Copying: 352/1024 [MB] (30 MBps) Copying: 382/1024 [MB] (29 MBps) Copying: 411/1024 [MB] (29 MBps) Copying: 439/1024 [MB] (28 MBps) Copying: 469/1024 [MB] (29 MBps) Copying: 498/1024 [MB] (29 MBps) Copying: 531/1024 [MB] (32 MBps) Copying: 566/1024 [MB] (34 MBps) Copying: 595/1024 [MB] (29 MBps) Copying: 627/1024 [MB] (31 MBps) Copying: 655/1024 [MB] (28 MBps) Copying: 684/1024 [MB] (29 MBps) Copying: 719/1024 [MB] (34 MBps) Copying: 748/1024 [MB] (29 MBps) Copying: 778/1024 [MB] (30 MBps) Copying: 807/1024 [MB] (29 MBps) Copying: 837/1024 [MB] (29 MBps) Copying: 867/1024 [MB] (30 MBps) Copying: 898/1024 [MB] (30 MBps) Copying: 927/1024 [MB] (29 MBps) Copying: 956/1024 [MB] (29 MBps) Copying: 990/1024 [MB] (33 MBps) Copying: 1020/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 30 MBps) 00:20:47.567 00:20:47.567 20:04:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:20:47.567 20:04:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:20:47.825 20:04:32 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:48.084 [2024-09-30 20:04:32.303417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.303485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.084 [2024-09-30 20:04:32.303499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.084 [2024-09-30 20:04:32.303509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.303529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.084 [2024-09-30 20:04:32.305788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.305819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.084 [2024-09-30 20:04:32.305833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.241 ms 00:20:48.084 [2024-09-30 20:04:32.305840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.307707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.307739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.084 [2024-09-30 20:04:32.307748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:20:48.084 [2024-09-30 20:04:32.307755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.321810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.321839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.084 [2024-09-30 20:04:32.321850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.036 ms 00:20:48.084 [2024-09-30 20:04:32.321857] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.326565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.326589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.084 [2024-09-30 20:04:32.326602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:20:48.084 [2024-09-30 20:04:32.326609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.345766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.345795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.084 [2024-09-30 20:04:32.345806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.054 ms 00:20:48.084 [2024-09-30 20:04:32.345813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.358588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.358621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.084 [2024-09-30 20:04:32.358633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.739 ms 00:20:48.084 [2024-09-30 20:04:32.358640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.358753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.358762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.084 [2024-09-30 20:04:32.358776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:48.084 [2024-09-30 20:04:32.358782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.376780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 
20:04:32.376813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:48.084 [2024-09-30 20:04:32.376824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.982 ms 00:20:48.084 [2024-09-30 20:04:32.376831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.394235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.394264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:48.084 [2024-09-30 20:04:32.394283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.370 ms 00:20:48.084 [2024-09-30 20:04:32.394289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.411275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.411302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.084 [2024-09-30 20:04:32.411311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.937 ms 00:20:48.084 [2024-09-30 20:04:32.411317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.428381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.084 [2024-09-30 20:04:32.428408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.084 [2024-09-30 20:04:32.428418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.001 ms 00:20:48.084 [2024-09-30 20:04:32.428424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.084 [2024-09-30 20:04:32.428454] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.084 [2024-09-30 20:04:32.428467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428477] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428574] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428679] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:48.084 [2024-09-30 20:04:32.428723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 
20:04:32.428769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 
[2024-09-30 20:04:32.428866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:20:48.085 [2024-09-30 20:04:32.428962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.428994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: 
free 00:20:48.085 [2024-09-30 20:04:32.429052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 
state: free 00:20:48.085 [2024-09-30 20:04:32.429148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.085 [2024-09-30 20:04:32.429160] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.085 [2024-09-30 20:04:32.429170] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b0c4c84-b803-4f53-bd37-677f95439c40 00:20:48.085 [2024-09-30 20:04:32.429183] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:48.085 [2024-09-30 20:04:32.429192] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:48.085 [2024-09-30 20:04:32.429198] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:48.085 [2024-09-30 20:04:32.429206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:48.085 [2024-09-30 20:04:32.429212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.085 [2024-09-30 20:04:32.429219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.085 [2024-09-30 20:04:32.429225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.085 [2024-09-30 20:04:32.429231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.085 [2024-09-30 20:04:32.429236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.085 [2024-09-30 20:04:32.429242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.085 [2024-09-30 20:04:32.429248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.085 [2024-09-30 20:04:32.429256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:20:48.085 [2024-09-30 20:04:32.429263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.085 [2024-09-30 20:04:32.439645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.085 [2024-09-30 
20:04:32.439673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.085 [2024-09-30 20:04:32.439684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.307 ms 00:20:48.085 [2024-09-30 20:04:32.439691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.085 [2024-09-30 20:04:32.439983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.085 [2024-09-30 20:04:32.439991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.085 [2024-09-30 20:04:32.439999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:48.085 [2024-09-30 20:04:32.440007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.470999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.471048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.344 [2024-09-30 20:04:32.471061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.471068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.471139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.471146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.344 [2024-09-30 20:04:32.471154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.471162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.471240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.471250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.344 [2024-09-30 20:04:32.471258] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.471265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.471295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.471301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.344 [2024-09-30 20:04:32.471310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.471315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.533125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.533181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.344 [2024-09-30 20:04:32.533193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.533200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.583635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.583691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.344 [2024-09-30 20:04:32.583703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.583713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.583819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.583827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.344 [2024-09-30 20:04:32.583836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.583842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 
20:04:32.583883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.583891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.344 [2024-09-30 20:04:32.583899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.583906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.583991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.583999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.344 [2024-09-30 20:04:32.584008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.584014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.584043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.584050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.344 [2024-09-30 20:04:32.584058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.584064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.584104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.584112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.344 [2024-09-30 20:04:32.584120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.584126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.584169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.344 [2024-09-30 20:04:32.584177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
base bdev 00:20:48.344 [2024-09-30 20:04:32.584187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.344 [2024-09-30 20:04:32.584194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.344 [2024-09-30 20:04:32.584332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.870 ms, result 0 00:20:48.344 true 00:20:48.344 20:04:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 75966 00:20:48.344 20:04:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75966 00:20:48.344 20:04:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:20:48.344 [2024-09-30 20:04:32.681710] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:48.344 [2024-09-30 20:04:32.681843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76560 ] 00:20:48.602 [2024-09-30 20:04:32.828890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.860 [2024-09-30 20:04:33.011046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.603  Copying: 255/1024 [MB] (255 MBps) Copying: 514/1024 [MB] (259 MBps) Copying: 771/1024 [MB] (256 MBps) Copying: 1022/1024 [MB] (251 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:20:53.603 00:20:53.603 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75966 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:20:53.603 20:04:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.862 [2024-09-30 20:04:37.973096] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:53.862 [2024-09-30 20:04:37.973219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76621 ] 00:20:53.862 [2024-09-30 20:04:38.121355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.120 [2024-09-30 20:04:38.299025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.378 [2024-09-30 20:04:38.530502] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.378 [2024-09-30 20:04:38.530558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.378 [2024-09-30 20:04:38.593864] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:54.378 [2024-09-30 20:04:38.594292] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:54.378 [2024-09-30 20:04:38.594570] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:54.638 [2024-09-30 20:04:38.769452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.769506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.638 [2024-09-30 20:04:38.769519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.638 [2024-09-30 20:04:38.769526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.769567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.769576] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.638 [2024-09-30 20:04:38.769583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:54.638 [2024-09-30 20:04:38.769591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.769605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.638 [2024-09-30 20:04:38.770118] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.638 [2024-09-30 20:04:38.770138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.770145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.638 [2024-09-30 20:04:38.770153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:20:54.638 [2024-09-30 20:04:38.770159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.771483] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.638 [2024-09-30 20:04:38.781582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.781629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.638 [2024-09-30 20:04:38.781640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.100 ms 00:20:54.638 [2024-09-30 20:04:38.781647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.781696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.781707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.638 [2024-09-30 20:04:38.781714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:54.638 [2024-09-30 20:04:38.781720] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.788027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.788059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.638 [2024-09-30 20:04:38.788068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.265 ms 00:20:54.638 [2024-09-30 20:04:38.788075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.788139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.788147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.638 [2024-09-30 20:04:38.788153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:54.638 [2024-09-30 20:04:38.788160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.788209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.788217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.638 [2024-09-30 20:04:38.788224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:54.638 [2024-09-30 20:04:38.788230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.788250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.638 [2024-09-30 20:04:38.791216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.791243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.638 [2024-09-30 20:04:38.791251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.972 ms 00:20:54.638 [2024-09-30 20:04:38.791257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.791301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.791309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.638 [2024-09-30 20:04:38.791316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.638 [2024-09-30 20:04:38.791322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.791339] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.638 [2024-09-30 20:04:38.791356] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.638 [2024-09-30 20:04:38.791385] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.638 [2024-09-30 20:04:38.791400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.638 [2024-09-30 20:04:38.791485] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.638 [2024-09-30 20:04:38.791494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.638 [2024-09-30 20:04:38.791504] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.638 [2024-09-30 20:04:38.791512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791519] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.638 [2024-09-30 20:04:38.791533] ftl_layout.c: 690:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.638 [2024-09-30 20:04:38.791540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.638 [2024-09-30 20:04:38.791546] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.638 [2024-09-30 20:04:38.791553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.791562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.638 [2024-09-30 20:04:38.791568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:20:54.638 [2024-09-30 20:04:38.791574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.791637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.638 [2024-09-30 20:04:38.791644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.638 [2024-09-30 20:04:38.791650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:54.638 [2024-09-30 20:04:38.791656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.638 [2024-09-30 20:04:38.791733] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.638 [2024-09-30 20:04:38.791747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.638 [2024-09-30 20:04:38.791757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.638 [2024-09-30 20:04:38.791776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.638 [2024-09-30 
20:04:38.791788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.638 [2024-09-30 20:04:38.791793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.638 [2024-09-30 20:04:38.791810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.638 [2024-09-30 20:04:38.791815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.638 [2024-09-30 20:04:38.791820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.638 [2024-09-30 20:04:38.791825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.638 [2024-09-30 20:04:38.791831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:54.638 [2024-09-30 20:04:38.791838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.638 [2024-09-30 20:04:38.791849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.638 [2024-09-30 20:04:38.791865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.638 [2024-09-30 20:04:38.791881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 
00:20:54.638 [2024-09-30 20:04:38.791892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.638 [2024-09-30 20:04:38.791898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.638 [2024-09-30 20:04:38.791914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.638 [2024-09-30 20:04:38.791925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.638 [2024-09-30 20:04:38.791930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:54.638 [2024-09-30 20:04:38.791935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.638 [2024-09-30 20:04:38.791941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.638 [2024-09-30 20:04:38.791946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:54.638 [2024-09-30 20:04:38.791952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.638 [2024-09-30 20:04:38.791957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.639 [2024-09-30 20:04:38.791963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:54.639 [2024-09-30 20:04:38.791968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.639 [2024-09-30 20:04:38.791973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.639 [2024-09-30 20:04:38.791979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:54.639 [2024-09-30 20:04:38.791984] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:54.639 [2024-09-30 20:04:38.791989] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.639 [2024-09-30 20:04:38.791995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.639 [2024-09-30 20:04:38.792001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.639 [2024-09-30 20:04:38.792007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.639 [2024-09-30 20:04:38.792016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.639 [2024-09-30 20:04:38.792022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.639 [2024-09-30 20:04:38.792027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.639 [2024-09-30 20:04:38.792033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.639 [2024-09-30 20:04:38.792038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.639 [2024-09-30 20:04:38.792044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.639 [2024-09-30 20:04:38.792050] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.639 [2024-09-30 20:04:38.792058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.639 [2024-09-30 20:04:38.792071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:54.639 [2024-09-30 20:04:38.792077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:20:54.639 [2024-09-30 20:04:38.792082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:54.639 [2024-09-30 20:04:38.792088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:54.639 [2024-09-30 20:04:38.792094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:54.639 [2024-09-30 20:04:38.792099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:54.639 [2024-09-30 20:04:38.792105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:54.639 [2024-09-30 20:04:38.792110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:54.639 [2024-09-30 20:04:38.792115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:54.639 [2024-09-30 20:04:38.792142] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.639 [2024-09-30 20:04:38.792149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.639 [2024-09-30 20:04:38.792164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.639 [2024-09-30 20:04:38.792169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.639 [2024-09-30 20:04:38.792175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.639 [2024-09-30 20:04:38.792181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.792187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.639 [2024-09-30 20:04:38.792193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:20:54.639 [2024-09-30 20:04:38.792199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.834188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.834234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.639 [2024-09-30 20:04:38.834246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.940 ms 00:20:54.639 [2024-09-30 20:04:38.834254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.834354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 
[2024-09-30 20:04:38.834363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.639 [2024-09-30 20:04:38.834370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:54.639 [2024-09-30 20:04:38.834377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.860864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.860899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.639 [2024-09-30 20:04:38.860909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.424 ms 00:20:54.639 [2024-09-30 20:04:38.860916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.860951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.860959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.639 [2024-09-30 20:04:38.860966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.639 [2024-09-30 20:04:38.860972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.861405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.861425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.639 [2024-09-30 20:04:38.861433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:20:54.639 [2024-09-30 20:04:38.861440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.861554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.861563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.639 [2024-09-30 20:04:38.861570] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:54.639 [2024-09-30 20:04:38.861576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.872685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.872713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.639 [2024-09-30 20:04:38.872722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.070 ms 00:20:54.639 [2024-09-30 20:04:38.872728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.882947] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:54.639 [2024-09-30 20:04:38.882979] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.639 [2024-09-30 20:04:38.882988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.882996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.639 [2024-09-30 20:04:38.883003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.173 ms 00:20:54.639 [2024-09-30 20:04:38.883010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.915540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.915581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.639 [2024-09-30 20:04:38.915599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.495 ms 00:20:54.639 [2024-09-30 20:04:38.915607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.927218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.927252] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.639 [2024-09-30 20:04:38.927264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.577 ms 00:20:54.639 [2024-09-30 20:04:38.927279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.938645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.938675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.639 [2024-09-30 20:04:38.938685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.330 ms 00:20:54.639 [2024-09-30 20:04:38.938693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.939313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.939339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.639 [2024-09-30 20:04:38.939349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:20:54.639 [2024-09-30 20:04:38.939357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.639 [2024-09-30 20:04:38.998722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.639 [2024-09-30 20:04:38.998780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.639 [2024-09-30 20:04:38.998794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.344 ms 00:20:54.639 [2024-09-30 20:04:38.998803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.009559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:54.897 [2024-09-30 20:04:39.012639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.012673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize L2P 00:20:54.897 [2024-09-30 20:04:39.012686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:20:54.897 [2024-09-30 20:04:39.012695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.012805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.012817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.897 [2024-09-30 20:04:39.012826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:54.897 [2024-09-30 20:04:39.012836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.012908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.012920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.897 [2024-09-30 20:04:39.012929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:54.897 [2024-09-30 20:04:39.012937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.012957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.012965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.897 [2024-09-30 20:04:39.012974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.897 [2024-09-30 20:04:39.012982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.013019] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.897 [2024-09-30 20:04:39.013029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.013039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.897 [2024-09-30 20:04:39.013047] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:54.897 [2024-09-30 20:04:39.013055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.036616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.036653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.897 [2024-09-30 20:04:39.036664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.544 ms 00:20:54.897 [2024-09-30 20:04:39.036675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.036754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.897 [2024-09-30 20:04:39.036764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.897 [2024-09-30 20:04:39.036773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:54.897 [2024-09-30 20:04:39.036781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.897 [2024-09-30 20:04:39.037964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.049 ms, result 0 00:21:18.294  Copying: 45/1024 [MB] (45 MBps) Copying: 88/1024 [MB] (43 MBps) Copying: 135/1024 [MB] (46 MBps) Copying: 181/1024 [MB] (46 MBps) Copying: 225/1024 [MB] (44 MBps) Copying: 277/1024 [MB] (51 MBps) Copying: 321/1024 [MB] (44 MBps) Copying: 368/1024 [MB] (46 MBps) Copying: 412/1024 [MB] (43 MBps) Copying: 456/1024 [MB] (43 MBps) Copying: 506/1024 [MB] (50 MBps) Copying: 556/1024 [MB] (49 MBps) Copying: 601/1024 [MB] (45 MBps) Copying: 650/1024 [MB] (48 MBps) Copying: 700/1024 [MB] (49 MBps) Copying: 742/1024 [MB] (42 MBps) Copying: 788/1024 [MB] (45 MBps) Copying: 832/1024 [MB] (43 MBps) Copying: 875/1024 [MB] (43 MBps) Copying: 924/1024 [MB] (48 MBps) Copying: 967/1024 [MB] (43 MBps) Copying: 1012/1024 [MB] (44 
MBps) Copying: 1023/1024 [MB] (11 MBps) Copying: 1024/1024 [MB] (average 43 MBps)[2024-09-30 20:05:02.399090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.399144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:18.294 [2024-09-30 20:05:02.399160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:18.294 [2024-09-30 20:05:02.399169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.400115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:18.294 [2024-09-30 20:05:02.404850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.404882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:18.294 [2024-09-30 20:05:02.404893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:21:18.294 [2024-09-30 20:05:02.404902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.416856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.416889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:18.294 [2024-09-30 20:05:02.416899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.827 ms 00:21:18.294 [2024-09-30 20:05:02.416908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.435136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.435169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:18.294 [2024-09-30 20:05:02.435180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.213 ms 00:21:18.294 [2024-09-30 20:05:02.435189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.441345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.441370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:18.294 [2024-09-30 20:05:02.441381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:21:18.294 [2024-09-30 20:05:02.441390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.465599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.465634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.294 [2024-09-30 20:05:02.465646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.172 ms 00:21:18.294 [2024-09-30 20:05:02.465654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.479707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.479738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.294 [2024-09-30 20:05:02.479753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.021 ms 00:21:18.294 [2024-09-30 20:05:02.479762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.294 [2024-09-30 20:05:02.536704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.294 [2024-09-30 20:05:02.536739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:18.294 [2024-09-30 20:05:02.536750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.908 ms 00:21:18.294 [2024-09-30 20:05:02.536759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.295 [2024-09-30 20:05:02.559385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.295 [2024-09-30 20:05:02.559414] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:18.295 [2024-09-30 20:05:02.559424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.612 ms 00:21:18.295 [2024-09-30 20:05:02.559432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.295 [2024-09-30 20:05:02.582084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.295 [2024-09-30 20:05:02.582122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:18.295 [2024-09-30 20:05:02.582131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.622 ms 00:21:18.295 [2024-09-30 20:05:02.582138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.295 [2024-09-30 20:05:02.604170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.295 [2024-09-30 20:05:02.604200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:18.295 [2024-09-30 20:05:02.604210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.002 ms 00:21:18.295 [2024-09-30 20:05:02.604217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.295 [2024-09-30 20:05:02.626515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.295 [2024-09-30 20:05:02.626543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:18.295 [2024-09-30 20:05:02.626553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.233 ms 00:21:18.295 [2024-09-30 20:05:02.626561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.295 [2024-09-30 20:05:02.626589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:18.295 [2024-09-30 20:05:02.626603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:21:18.295 [2024-09-30 20:05:02.626613] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626723] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626824] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626935] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.626964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 
20:05:02.627204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:18.295 [2024-09-30 20:05:02.627211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 
[2024-09-30 20:05:02.627329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 
00:21:18.296 [2024-09-30 20:05:02.627438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: 
free 00:21:18.296 [2024-09-30 20:05:02.627549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:18.296 [2024-09-30 20:05:02.627566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:18.296 [2024-09-30 20:05:02.627574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b0c4c84-b803-4f53-bd37-677f95439c40 00:21:18.296 [2024-09-30 20:05:02.627582] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:21:18.296 [2024-09-30 20:05:02.627589] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129984 00:21:18.296 [2024-09-30 20:05:02.627596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:21:18.296 [2024-09-30 20:05:02.627604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:21:18.296 [2024-09-30 20:05:02.627611] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:18.296 [2024-09-30 20:05:02.627619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:18.296 [2024-09-30 20:05:02.627632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:18.296 [2024-09-30 20:05:02.627638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:18.296 [2024-09-30 20:05:02.627645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:18.296 [2024-09-30 20:05:02.627652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.296 [2024-09-30 20:05:02.627662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:18.296 [2024-09-30 20:05:02.627670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:21:18.296 [2024-09-30 20:05:02.627678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.296 [2024-09-30 20:05:02.640549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.296 
[2024-09-30 20:05:02.640580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:18.296 [2024-09-30 20:05:02.640590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.856 ms 00:21:18.296 [2024-09-30 20:05:02.640598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.296 [2024-09-30 20:05:02.640966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.296 [2024-09-30 20:05:02.640981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:18.296 [2024-09-30 20:05:02.640990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:21:18.296 [2024-09-30 20:05:02.640998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.670452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.670486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.555 [2024-09-30 20:05:02.670496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.670508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.670565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.670579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.555 [2024-09-30 20:05:02.670587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.670594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.670643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.670653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.555 [2024-09-30 20:05:02.670661] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.670668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.670686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.670694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.555 [2024-09-30 20:05:02.670702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.670709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.734118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.734160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.555 [2024-09-30 20:05:02.734170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.734177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.785507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.785553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.555 [2024-09-30 20:05:02.785563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.785570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 [2024-09-30 20:05:02.785640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.785649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.555 [2024-09-30 20:05:02.785656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.555 [2024-09-30 20:05:02.785662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.555 
[2024-09-30 20:05:02.785690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.555 [2024-09-30 20:05:02.785702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.555 [2024-09-30 20:05:02.785709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.556 [2024-09-30 20:05:02.785715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.556 [2024-09-30 20:05:02.785791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.556 [2024-09-30 20:05:02.785800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.556 [2024-09-30 20:05:02.785806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.556 [2024-09-30 20:05:02.785812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.556 [2024-09-30 20:05:02.785835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.556 [2024-09-30 20:05:02.785843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:18.556 [2024-09-30 20:05:02.785852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.556 [2024-09-30 20:05:02.785858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.556 [2024-09-30 20:05:02.785889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.556 [2024-09-30 20:05:02.785896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.556 [2024-09-30 20:05:02.785903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.556 [2024-09-30 20:05:02.785909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.556 [2024-09-30 20:05:02.785945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.556 [2024-09-30 20:05:02.785955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:21:18.556 [2024-09-30 20:05:02.785962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.556 [2024-09-30 20:05:02.785968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.556 [2024-09-30 20:05:02.786070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.765 ms, result 0 00:21:21.085 00:21:21.085 00:21:21.085 20:05:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:22.985 20:05:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:22.985 [2024-09-30 20:05:07.339706] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:22.985 [2024-09-30 20:05:07.339817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76912 ] 00:21:23.244 [2024-09-30 20:05:07.484990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.503 [2024-09-30 20:05:07.661112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.763 [2024-09-30 20:05:07.890766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.763 [2024-09-30 20:05:07.890817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.763 [2024-09-30 20:05:08.040022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.040062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.763 [2024-09-30 
20:05:08.040074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.763 [2024-09-30 20:05:08.040084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.040121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.040130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.763 [2024-09-30 20:05:08.040136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:23.763 [2024-09-30 20:05:08.040142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.040155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.763 [2024-09-30 20:05:08.040715] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.763 [2024-09-30 20:05:08.040729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.040735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.763 [2024-09-30 20:05:08.040742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:21:23.763 [2024-09-30 20:05:08.040748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.041980] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:23.763 [2024-09-30 20:05:08.052202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.052229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:23.763 [2024-09-30 20:05:08.052239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.223 ms 00:21:23.763 [2024-09-30 20:05:08.052245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 
20:05:08.052426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.052444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:23.763 [2024-09-30 20:05:08.052451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:23.763 [2024-09-30 20:05:08.052458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.058604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.058628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.763 [2024-09-30 20:05:08.058636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:21:23.763 [2024-09-30 20:05:08.058642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.058705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.058713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.763 [2024-09-30 20:05:08.058719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:23.763 [2024-09-30 20:05:08.058726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.058776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.058784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.763 [2024-09-30 20:05:08.058791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:23.763 [2024-09-30 20:05:08.058797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.058815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.763 [2024-09-30 20:05:08.061753] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.061776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.763 [2024-09-30 20:05:08.061783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:21:23.763 [2024-09-30 20:05:08.061790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.061816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.061825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.763 [2024-09-30 20:05:08.061831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:23.763 [2024-09-30 20:05:08.061838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.061858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:23.763 [2024-09-30 20:05:08.061876] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:23.763 [2024-09-30 20:05:08.061906] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:23.763 [2024-09-30 20:05:08.061918] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:23.763 [2024-09-30 20:05:08.062003] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.763 [2024-09-30 20:05:08.062013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.763 [2024-09-30 20:05:08.062022] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:23.763 [2024-09-30 20:05:08.062031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device 
capacity: 103424.00 MiB 00:21:23.763 [2024-09-30 20:05:08.062038] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.763 [2024-09-30 20:05:08.062045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.763 [2024-09-30 20:05:08.062051] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.763 [2024-09-30 20:05:08.062056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.763 [2024-09-30 20:05:08.062063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.763 [2024-09-30 20:05:08.062070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.062076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.763 [2024-09-30 20:05:08.062082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:21:23.763 [2024-09-30 20:05:08.062088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.062153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.763 [2024-09-30 20:05:08.062162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.763 [2024-09-30 20:05:08.062168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:23.763 [2024-09-30 20:05:08.062174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.763 [2024-09-30 20:05:08.062259] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.763 [2024-09-30 20:05:08.062278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.763 [2024-09-30 20:05:08.062286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.763 [2024-09-30 20:05:08.062292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:21:23.764 [2024-09-30 20:05:08.062298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.764 [2024-09-30 20:05:08.062304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.764 [2024-09-30 20:05:08.062320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.764 [2024-09-30 20:05:08.062331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.764 [2024-09-30 20:05:08.062336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.764 [2024-09-30 20:05:08.062342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.764 [2024-09-30 20:05:08.062355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.764 [2024-09-30 20:05:08.062361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.764 [2024-09-30 20:05:08.062366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.764 [2024-09-30 20:05:08.062377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.764 [2024-09-30 20:05:08.062393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.764 [2024-09-30 20:05:08.062408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.764 [2024-09-30 20:05:08.062424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.764 [2024-09-30 20:05:08.062439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.764 [2024-09-30 20:05:08.062455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.764 [2024-09-30 20:05:08.062465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.764 [2024-09-30 20:05:08.062470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.764 [2024-09-30 20:05:08.062475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.764 [2024-09-30 20:05:08.062480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.764 [2024-09-30 20:05:08.062485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:23.764 [2024-09-30 20:05:08.062490] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.764 [2024-09-30 20:05:08.062501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.764 [2024-09-30 20:05:08.062507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062512] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.764 [2024-09-30 20:05:08.062518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.764 [2024-09-30 20:05:08.062526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.764 [2024-09-30 20:05:08.062539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.764 [2024-09-30 20:05:08.062545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.764 [2024-09-30 20:05:08.062550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.764 [2024-09-30 20:05:08.062556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.764 [2024-09-30 20:05:08.062561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.764 [2024-09-30 20:05:08.062566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.764 [2024-09-30 20:05:08.062573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.764 [2024-09-30 20:05:08.062580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 
blk_sz:0x5000 00:21:23.764 [2024-09-30 20:05:08.062592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.764 [2024-09-30 20:05:08.062598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.764 [2024-09-30 20:05:08.062604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.764 [2024-09-30 20:05:08.062609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.764 [2024-09-30 20:05:08.062615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.764 [2024-09-30 20:05:08.062620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.764 [2024-09-30 20:05:08.062626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.764 [2024-09-30 20:05:08.062632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.764 [2024-09-30 20:05:08.062637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062654] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.764 [2024-09-30 20:05:08.062667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.764 [2024-09-30 20:05:08.062674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.764 [2024-09-30 20:05:08.062687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.764 [2024-09-30 20:05:08.062693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.764 [2024-09-30 20:05:08.062699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.764 [2024-09-30 20:05:08.062705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.062710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.764 [2024-09-30 20:05:08.062717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:21:23.764 [2024-09-30 20:05:08.062723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.764 [2024-09-30 20:05:08.098142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.098192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
metadata 00:21:23.764 [2024-09-30 20:05:08.098208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.368 ms 00:21:23.764 [2024-09-30 20:05:08.098220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.764 [2024-09-30 20:05:08.098374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.098389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:23.764 [2024-09-30 20:05:08.098400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:23.764 [2024-09-30 20:05:08.098410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.764 [2024-09-30 20:05:08.124903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.124931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.764 [2024-09-30 20:05:08.124942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.420 ms 00:21:23.764 [2024-09-30 20:05:08.124949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.764 [2024-09-30 20:05:08.124983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.124991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.764 [2024-09-30 20:05:08.124998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:23.764 [2024-09-30 20:05:08.125005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.764 [2024-09-30 20:05:08.125423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.125436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.764 [2024-09-30 20:05:08.125445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:21:23.764 [2024-09-30 20:05:08.125455] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.764 [2024-09-30 20:05:08.125566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.764 [2024-09-30 20:05:08.125574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.764 [2024-09-30 20:05:08.125581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:23.764 [2024-09-30 20:05:08.125588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.023 [2024-09-30 20:05:08.136560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.023 [2024-09-30 20:05:08.136582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.023 [2024-09-30 20:05:08.136590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.956 ms 00:21:24.023 [2024-09-30 20:05:08.136597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.023 [2024-09-30 20:05:08.146798] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:24.024 [2024-09-30 20:05:08.146826] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.024 [2024-09-30 20:05:08.146836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.146843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.024 [2024-09-30 20:05:08.146851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.138 ms 00:21:24.024 [2024-09-30 20:05:08.146858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.175302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.175337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.024 [2024-09-30 20:05:08.175350] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.406 ms 00:21:24.024 [2024-09-30 20:05:08.175357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.184619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.184645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.024 [2024-09-30 20:05:08.184653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.204 ms 00:21:24.024 [2024-09-30 20:05:08.184660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.193143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.193167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.024 [2024-09-30 20:05:08.193175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.456 ms 00:21:24.024 [2024-09-30 20:05:08.193182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.193681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.193700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.024 [2024-09-30 20:05:08.193707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:21:24.024 [2024-09-30 20:05:08.193714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.241683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.241739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.024 [2024-09-30 20:05:08.241751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.953 ms 00:21:24.024 [2024-09-30 20:05:08.241758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:24.024 [2024-09-30 20:05:08.250333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.024 [2024-09-30 20:05:08.253014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.253040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.024 [2024-09-30 20:05:08.253052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.203 ms 00:21:24.024 [2024-09-30 20:05:08.253064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.253172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.253182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.024 [2024-09-30 20:05:08.253190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:24.024 [2024-09-30 20:05:08.253198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.254697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.254721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.024 [2024-09-30 20:05:08.254730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:21:24.024 [2024-09-30 20:05:08.254738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.254766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.254774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.024 [2024-09-30 20:05:08.254782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.024 [2024-09-30 20:05:08.254789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.254821] 
mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.024 [2024-09-30 20:05:08.254831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.254838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.024 [2024-09-30 20:05:08.254849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:24.024 [2024-09-30 20:05:08.254855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.273876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.273903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.024 [2024-09-30 20:05:08.273912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.005 ms 00:21:24.024 [2024-09-30 20:05:08.273919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.273984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.024 [2024-09-30 20:05:08.273993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.024 [2024-09-30 20:05:08.274000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:24.024 [2024-09-30 20:05:08.274007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.024 [2024-09-30 20:05:08.274920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.508 ms, result 0 00:21:46.285  Copying: 1024/1024
[MB] (average 48 MBps)[2024-09-30 20:05:30.306612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.306686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:46.285 [2024-09-30 20:05:30.306701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:46.285 [2024-09-30 20:05:30.306710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.306732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:46.285 [2024-09-30 20:05:30.309516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.309554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:46.285 [2024-09-30 20:05:30.309565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:21:46.285 [2024-09-30 20:05:30.309573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.311220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.311245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:46.285 [2024-09-30 20:05:30.311254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:21:46.285 [2024-09-30 20:05:30.311262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.321443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 
[2024-09-30 20:05:30.321477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:46.285 [2024-09-30 20:05:30.321488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.156 ms 00:21:46.285 [2024-09-30 20:05:30.321500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.327715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.327742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:46.285 [2024-09-30 20:05:30.327752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:21:46.285 [2024-09-30 20:05:30.327759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.355148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.355188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:46.285 [2024-09-30 20:05:30.355201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.337 ms 00:21:46.285 [2024-09-30 20:05:30.355209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.369901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.369934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:46.285 [2024-09-30 20:05:30.369947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.657 ms 00:21:46.285 [2024-09-30 20:05:30.369955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.371978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.372009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:46.285 [2024-09-30 20:05:30.372019] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.996 ms 00:21:46.285 [2024-09-30 20:05:30.372026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.397227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.397261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:46.285 [2024-09-30 20:05:30.397281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.185 ms 00:21:46.285 [2024-09-30 20:05:30.397289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.420129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.420160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:46.285 [2024-09-30 20:05:30.420171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.807 ms 00:21:46.285 [2024-09-30 20:05:30.420179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.442745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.442775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:46.285 [2024-09-30 20:05:30.442785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.534 ms 00:21:46.285 [2024-09-30 20:05:30.442792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 20:05:30.465336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.285 [2024-09-30 20:05:30.465367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:46.285 [2024-09-30 20:05:30.465377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.488 ms 00:21:46.285 [2024-09-30 20:05:30.465384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.285 [2024-09-30 
20:05:30.465413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:46.285 [2024-09-30 20:05:30.465429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:46.285 [2024-09-30 20:05:30.465445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:46.285 [2024-09-30 20:05:30.465454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465542] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 
20:05:30.465649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:46.285 [2024-09-30 20:05:30.465702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 
[2024-09-30 20:05:30.465754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:21:46.286 [2024-09-30 20:05:30.465859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: 
free 00:21:46.286 [2024-09-30 20:05:30.465964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.465994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 
state: free 00:21:46.286 [2024-09-30 20:05:30.466069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 
0 state: free 00:21:46.286 [2024-09-30 20:05:30.466173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:46.286 [2024-09-30 20:05:30.466203] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:46.286 [2024-09-30 20:05:30.466211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b0c4c84-b803-4f53-bd37-677f95439c40 00:21:46.286 [2024-09-30 20:05:30.466219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:46.286 [2024-09-30 20:05:30.466226] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135616 00:21:46.286 [2024-09-30 20:05:30.466233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133632 00:21:46.286 [2024-09-30 20:05:30.466241] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:21:46.286 [2024-09-30 20:05:30.466248] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:46.286 [2024-09-30 20:05:30.466255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:46.286 [2024-09-30 20:05:30.466279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:46.286 [2024-09-30 20:05:30.466286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:46.286 [2024-09-30 20:05:30.466292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:46.286 [2024-09-30 20:05:30.466301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.286 [2024-09-30 20:05:30.466309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:46.286 [2024-09-30 20:05:30.466323] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:21:46.286 [2024-09-30 20:05:30.466333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.286 [2024-09-30 20:05:30.479360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.286 [2024-09-30 20:05:30.479391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:46.286 [2024-09-30 20:05:30.479401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.011 ms 00:21:46.286 [2024-09-30 20:05:30.479410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.286 [2024-09-30 20:05:30.479785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.286 [2024-09-30 20:05:30.479807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:46.286 [2024-09-30 20:05:30.479816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:21:46.286 [2024-09-30 20:05:30.479824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.286 [2024-09-30 20:05:30.509586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.286 [2024-09-30 20:05:30.509638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.286 [2024-09-30 20:05:30.509650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.286 [2024-09-30 20:05:30.509658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.286 [2024-09-30 20:05:30.509716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.286 [2024-09-30 20:05:30.509727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.286 [2024-09-30 20:05:30.509735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.287 [2024-09-30 20:05:30.509743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:46.287 [2024-09-30 20:05:30.509796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.287 [2024-09-30 20:05:30.509806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.287 [2024-09-30 20:05:30.509815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.287 [2024-09-30 20:05:30.509823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.287 [2024-09-30 20:05:30.509839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.287 [2024-09-30 20:05:30.509847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.287 [2024-09-30 20:05:30.509859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.287 [2024-09-30 20:05:30.509866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.287 [2024-09-30 20:05:30.590099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.287 [2024-09-30 20:05:30.590150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.287 [2024-09-30 20:05:30.590163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.287 [2024-09-30 20:05:30.590171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.545 [2024-09-30 20:05:30.656437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.545 [2024-09-30 20:05:30.656487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.545 [2024-09-30 20:05:30.656499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.545 [2024-09-30 20:05:30.656507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.545 [2024-09-30 20:05:30.656590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.545 [2024-09-30 20:05:30.656600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:21:46.545 [2024-09-30 20:05:30.656610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.545 [2024-09-30 20:05:30.656617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.545 [2024-09-30 20:05:30.656653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.545 [2024-09-30 20:05:30.656662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.545 [2024-09-30 20:05:30.656671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.545 [2024-09-30 20:05:30.656682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.545 [2024-09-30 20:05:30.656768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.545 [2024-09-30 20:05:30.656778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.545 [2024-09-30 20:05:30.656786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.545 [2024-09-30 20:05:30.656793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.545 [2024-09-30 20:05:30.656823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.545 [2024-09-30 20:05:30.656832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:46.546 [2024-09-30 20:05:30.656840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.546 [2024-09-30 20:05:30.656848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.546 [2024-09-30 20:05:30.656888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.546 [2024-09-30 20:05:30.656897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.546 [2024-09-30 20:05:30.656905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.546 [2024-09-30 
20:05:30.656913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.546 [2024-09-30 20:05:30.656955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:46.546 [2024-09-30 20:05:30.656965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.546 [2024-09-30 20:05:30.656974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:46.546 [2024-09-30 20:05:30.656985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.546 [2024-09-30 20:05:30.657105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.466 ms, result 0 00:21:47.921 00:21:47.921 00:21:47.921 20:05:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:49.863 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:49.863 20:05:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:49.863 [2024-09-30 20:05:34.226460] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:21:49.863 [2024-09-30 20:05:34.226596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77190 ] 00:21:50.120 [2024-09-30 20:05:34.378929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.378 [2024-09-30 20:05:34.590576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.635 [2024-09-30 20:05:34.862495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.635 [2024-09-30 20:05:34.862564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:50.894 [2024-09-30 20:05:35.017657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.017716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:50.894 [2024-09-30 20:05:35.017731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:50.894 [2024-09-30 20:05:35.017744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.017789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.017799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:50.894 [2024-09-30 20:05:35.017807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:50.894 [2024-09-30 20:05:35.017814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.017833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:50.894 [2024-09-30 20:05:35.018523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:50.894 [2024-09-30 
20:05:35.018541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.018549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:50.894 [2024-09-30 20:05:35.018558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:21:50.894 [2024-09-30 20:05:35.018565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.019889] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:50.894 [2024-09-30 20:05:35.032669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.032705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:50.894 [2024-09-30 20:05:35.032718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.781 ms 00:21:50.894 [2024-09-30 20:05:35.032725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.032781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.032791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:50.894 [2024-09-30 20:05:35.032799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:50.894 [2024-09-30 20:05:35.032806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.039380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.039412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:50.894 [2024-09-30 20:05:35.039423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.515 ms 00:21:50.894 [2024-09-30 20:05:35.039430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.039515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.039524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:50.894 [2024-09-30 20:05:35.039532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:50.894 [2024-09-30 20:05:35.039540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.039586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.039596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:50.894 [2024-09-30 20:05:35.039604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:50.894 [2024-09-30 20:05:35.039611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.039634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:50.894 [2024-09-30 20:05:35.043314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.043341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:50.894 [2024-09-30 20:05:35.043351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:21:50.894 [2024-09-30 20:05:35.043359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.043388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.043396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:50.894 [2024-09-30 20:05:35.043405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:50.894 [2024-09-30 20:05:35.043412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.043443] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:50.894 
[2024-09-30 20:05:35.043462] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:50.894 [2024-09-30 20:05:35.043498] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:50.894 [2024-09-30 20:05:35.043514] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:50.894 [2024-09-30 20:05:35.043618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:50.894 [2024-09-30 20:05:35.043629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:50.894 [2024-09-30 20:05:35.043639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:50.894 [2024-09-30 20:05:35.043651] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:50.894 [2024-09-30 20:05:35.043660] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:50.894 [2024-09-30 20:05:35.043669] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:50.894 [2024-09-30 20:05:35.043677] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:50.894 [2024-09-30 20:05:35.043684] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:50.894 [2024-09-30 20:05:35.043692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:50.894 [2024-09-30 20:05:35.043699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.043706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:50.894 [2024-09-30 20:05:35.043715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.259 ms 00:21:50.894 [2024-09-30 20:05:35.043721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.043804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.894 [2024-09-30 20:05:35.043814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:50.894 [2024-09-30 20:05:35.043821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:50.894 [2024-09-30 20:05:35.043828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.894 [2024-09-30 20:05:35.043940] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:50.894 [2024-09-30 20:05:35.043950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:50.894 [2024-09-30 20:05:35.043958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.894 [2024-09-30 20:05:35.043966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.894 [2024-09-30 20:05:35.043974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:50.894 [2024-09-30 20:05:35.043981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:50.894 [2024-09-30 20:05:35.043987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:50.894 [2024-09-30 20:05:35.043994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:50.894 [2024-09-30 20:05:35.044000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.894 [2024-09-30 20:05:35.044013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:50.894 [2024-09-30 20:05:35.044020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:50.894 [2024-09-30 20:05:35.044026] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:50.894 [2024-09-30 20:05:35.044039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:50.894 [2024-09-30 20:05:35.044047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:50.894 [2024-09-30 20:05:35.044053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:50.894 [2024-09-30 20:05:35.044066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:50.894 [2024-09-30 20:05:35.044075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:50.894 [2024-09-30 20:05:35.044089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.894 [2024-09-30 20:05:35.044102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:50.894 [2024-09-30 20:05:35.044108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.894 [2024-09-30 20:05:35.044121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:50.894 [2024-09-30 20:05:35.044127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.894 [2024-09-30 20:05:35.044140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:50.894 [2024-09-30 20:05:35.044146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:50.894 [2024-09-30 20:05:35.044153] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:50.895 [2024-09-30 20:05:35.044159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:50.895 [2024-09-30 20:05:35.044165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:50.895 [2024-09-30 20:05:35.044171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.895 [2024-09-30 20:05:35.044178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:50.895 [2024-09-30 20:05:35.044184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:50.895 [2024-09-30 20:05:35.044190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:50.895 [2024-09-30 20:05:35.044196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:50.895 [2024-09-30 20:05:35.044202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:50.895 [2024-09-30 20:05:35.044209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.895 [2024-09-30 20:05:35.044215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:50.895 [2024-09-30 20:05:35.044221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:50.895 [2024-09-30 20:05:35.044227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.895 [2024-09-30 20:05:35.044233] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:50.895 [2024-09-30 20:05:35.044241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:50.895 [2024-09-30 20:05:35.044250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:50.895 [2024-09-30 20:05:35.044257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:50.895 [2024-09-30 20:05:35.044265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:50.895 
[2024-09-30 20:05:35.044284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:50.895 [2024-09-30 20:05:35.044291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:50.895 [2024-09-30 20:05:35.044299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:50.895 [2024-09-30 20:05:35.044306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:50.895 [2024-09-30 20:05:35.044312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:50.895 [2024-09-30 20:05:35.044322] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:50.895 [2024-09-30 20:05:35.044332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:50.895 [2024-09-30 20:05:35.044348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:50.895 [2024-09-30 20:05:35.044355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:50.895 [2024-09-30 20:05:35.044362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:50.895 [2024-09-30 20:05:35.044370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:50.895 [2024-09-30 20:05:35.044377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:50.895 [2024-09-30 20:05:35.044384] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:50.895 [2024-09-30 20:05:35.044391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:50.895 [2024-09-30 20:05:35.044398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:50.895 [2024-09-30 20:05:35.044405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:50.895 [2024-09-30 20:05:35.044441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:50.895 [2024-09-30 20:05:35.044449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:50.895 [2024-09-30 20:05:35.044463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:21:50.895 [2024-09-30 20:05:35.044470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:50.895 [2024-09-30 20:05:35.044477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:50.895 [2024-09-30 20:05:35.044484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.044491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:50.895 [2024-09-30 20:05:35.044499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:21:50.895 [2024-09-30 20:05:35.044506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.085126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.085173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:50.895 [2024-09-30 20:05:35.085186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.573 ms 00:21:50.895 [2024-09-30 20:05:35.085195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.085307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.085317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:50.895 [2024-09-30 20:05:35.085326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:50.895 [2024-09-30 20:05:35.085335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.117745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.117799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:50.895 [2024-09-30 
20:05:35.117812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.346 ms 00:21:50.895 [2024-09-30 20:05:35.117821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.117857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.117866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:50.895 [2024-09-30 20:05:35.117876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:50.895 [2024-09-30 20:05:35.117883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.118353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.118369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:50.895 [2024-09-30 20:05:35.118379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:21:50.895 [2024-09-30 20:05:35.118391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.118524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.118534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:50.895 [2024-09-30 20:05:35.118542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:21:50.895 [2024-09-30 20:05:35.118549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.131855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.131884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:50.895 [2024-09-30 20:05:35.131894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.284 ms 00:21:50.895 [2024-09-30 20:05:35.131902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:50.895 [2024-09-30 20:05:35.144848] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:50.895 [2024-09-30 20:05:35.144881] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:50.895 [2024-09-30 20:05:35.144893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.144902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:50.895 [2024-09-30 20:05:35.144911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:21:50.895 [2024-09-30 20:05:35.144918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.169452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.169488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:50.895 [2024-09-30 20:05:35.169499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.491 ms 00:21:50.895 [2024-09-30 20:05:35.169507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.180784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.180815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:50.895 [2024-09-30 20:05:35.180824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.232 ms 00:21:50.895 [2024-09-30 20:05:35.180832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.191935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.191963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:50.895 [2024-09-30 20:05:35.191972] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:21:50.895 [2024-09-30 20:05:35.191979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.192613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.192634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:50.895 [2024-09-30 20:05:35.192643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:21:50.895 [2024-09-30 20:05:35.192651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.895 [2024-09-30 20:05:35.251215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.895 [2024-09-30 20:05:35.251287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:50.895 [2024-09-30 20:05:35.251301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.544 ms 00:21:50.895 [2024-09-30 20:05:35.251310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.153 [2024-09-30 20:05:35.261808] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:51.153 [2024-09-30 20:05:35.264797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.153 [2024-09-30 20:05:35.264829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:51.153 [2024-09-30 20:05:35.264841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.429 ms 00:21:51.153 [2024-09-30 20:05:35.264854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.153 [2024-09-30 20:05:35.264966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.153 [2024-09-30 20:05:35.264977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:51.153 [2024-09-30 20:05:35.264987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:51.154 
[2024-09-30 20:05:35.264994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.154 [2024-09-30 20:05:35.265698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.154 [2024-09-30 20:05:35.265728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:51.154 [2024-09-30 20:05:35.265738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:21:51.154 [2024-09-30 20:05:35.265746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.154 [2024-09-30 20:05:35.265777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.154 [2024-09-30 20:05:35.265786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:51.154 [2024-09-30 20:05:35.265794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:51.154 [2024-09-30 20:05:35.265802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.154 [2024-09-30 20:05:35.265837] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:51.154 [2024-09-30 20:05:35.265847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.154 [2024-09-30 20:05:35.265855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:51.154 [2024-09-30 20:05:35.265866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:51.154 [2024-09-30 20:05:35.265874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.154 [2024-09-30 20:05:35.289416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.154 [2024-09-30 20:05:35.289455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:51.154 [2024-09-30 20:05:35.289466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.524 ms 00:21:51.154 [2024-09-30 20:05:35.289474] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.154 [2024-09-30 20:05:35.289549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.154 [2024-09-30 20:05:35.289560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:51.154 [2024-09-30 20:05:35.289570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:51.154 [2024-09-30 20:05:35.289577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.154 [2024-09-30 20:05:35.290604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.466 ms, result 0 00:22:14.621  Copying: 46/1024 [MB] (46 MBps) Copying: 93/1024 [MB] (46 MBps) Copying: 141/1024 [MB] (48 MBps) Copying: 186/1024 [MB] (44 MBps) Copying: 235/1024 [MB] (48 MBps) Copying: 283/1024 [MB] (47 MBps) Copying: 328/1024 [MB] (44 MBps) Copying: 375/1024 [MB] (47 MBps) Copying: 420/1024 [MB] (44 MBps) Copying: 464/1024 [MB] (43 MBps) Copying: 514/1024 [MB] (49 MBps) Copying: 558/1024 [MB] (44 MBps) Copying: 605/1024 [MB] (46 MBps) Copying: 651/1024 [MB] (45 MBps) Copying: 695/1024 [MB] (44 MBps) Copying: 734/1024 [MB] (38 MBps) Copying: 768/1024 [MB] (34 MBps) Copying: 812/1024 [MB] (43 MBps) Copying: 852/1024 [MB] (40 MBps) Copying: 892/1024 [MB] (39 MBps) Copying: 933/1024 [MB] (41 MBps) Copying: 972/1024 [MB] (38 MBps) Copying: 1018/1024 [MB] (45 MBps) Copying: 1024/1024 [MB] (average 44 MBps)[2024-09-30 20:05:58.775580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.775664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:14.621 [2024-09-30 20:05:58.775685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:14.621 [2024-09-30 20:05:58.775703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.775734] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:14.621 [2024-09-30 20:05:58.779702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.779738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:14.621 [2024-09-30 20:05:58.779751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.947 ms 00:22:14.621 [2024-09-30 20:05:58.779763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.780090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.780112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:14.621 [2024-09-30 20:05:58.780141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:14.621 [2024-09-30 20:05:58.781045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.786761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.786784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:14.621 [2024-09-30 20:05:58.786794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.688 ms 00:22:14.621 [2024-09-30 20:05:58.786804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.792968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.792991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:14.621 [2024-09-30 20:05:58.792999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.147 ms 00:22:14.621 [2024-09-30 20:05:58.793006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.817058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 
[2024-09-30 20:05:58.817087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:14.621 [2024-09-30 20:05:58.817098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.992 ms 00:22:14.621 [2024-09-30 20:05:58.817106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.831286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.831327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:14.621 [2024-09-30 20:05:58.831337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.159 ms 00:22:14.621 [2024-09-30 20:05:58.831345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.833231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.833256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:14.621 [2024-09-30 20:05:58.833275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:22:14.621 [2024-09-30 20:05:58.833283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.856250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.856283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:14.621 [2024-09-30 20:05:58.856294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.952 ms 00:22:14.621 [2024-09-30 20:05:58.856301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.878871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.878897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:14.621 [2024-09-30 20:05:58.878907] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.553 ms 00:22:14.621 [2024-09-30 20:05:58.878914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.901281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.901305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:14.621 [2024-09-30 20:05:58.901315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.349 ms 00:22:14.621 [2024-09-30 20:05:58.901323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.923151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.621 [2024-09-30 20:05:58.923176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:14.621 [2024-09-30 20:05:58.923185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.788 ms 00:22:14.621 [2024-09-30 20:05:58.923193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.621 [2024-09-30 20:05:58.923210] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:14.621 [2024-09-30 20:05:58.923224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:14.621 [2024-09-30 20:05:58.923234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:22:14.621 [2024-09-30 20:05:58.923243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:14.621 [2024-09-30 20:05:58.923251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:14.621 [2024-09-30 20:05:58.923259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:14.621 [2024-09-30 20:05:58.923274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:14.621 [2024-09-30 20:05:58.923283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923485] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923587] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923693] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 
20:05:58.923795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 
[2024-09-30 20:05:58.923896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:14.622 [2024-09-30 20:05:58.923947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:14.623 [2024-09-30 20:05:58.923955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:14.623 [2024-09-30 20:05:58.923962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:14.623 [2024-09-30 20:05:58.923969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:14.623 [2024-09-30 20:05:58.923985] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:14.623 [2024-09-30 20:05:58.923993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b0c4c84-b803-4f53-bd37-677f95439c40 00:22:14.623 [2024-09-30 20:05:58.924001] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:22:14.623 [2024-09-30 20:05:58.924010] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:14.623 [2024-09-30 20:05:58.924018] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:14.623 [2024-09-30 20:05:58.924026] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:14.623 [2024-09-30 20:05:58.924033] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:14.623 [2024-09-30 20:05:58.924045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:14.623 [2024-09-30 20:05:58.924052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:14.623 [2024-09-30 20:05:58.924058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:14.623 [2024-09-30 20:05:58.924064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:14.623 [2024-09-30 20:05:58.924071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.623 [2024-09-30 20:05:58.924088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:14.623 [2024-09-30 20:05:58.924097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:22:14.623 [2024-09-30 20:05:58.924104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.623 [2024-09-30 20:05:58.937043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.623 [2024-09-30 20:05:58.937066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:14.623 [2024-09-30 20:05:58.937077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.924 ms 00:22:14.623 [2024-09-30 20:05:58.937089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.623 [2024-09-30 20:05:58.937450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.623 [2024-09-30 20:05:58.937465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:14.623 [2024-09-30 
20:05:58.937473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:22:14.623 [2024-09-30 20:05:58.937480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.623 [2024-09-30 20:05:58.967015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.623 [2024-09-30 20:05:58.967050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.623 [2024-09-30 20:05:58.967060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.623 [2024-09-30 20:05:58.967067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.623 [2024-09-30 20:05:58.967124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.623 [2024-09-30 20:05:58.967132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.623 [2024-09-30 20:05:58.967140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.623 [2024-09-30 20:05:58.967148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.623 [2024-09-30 20:05:58.967206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.623 [2024-09-30 20:05:58.967217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.623 [2024-09-30 20:05:58.967225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.623 [2024-09-30 20:05:58.967236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.623 [2024-09-30 20:05:58.967252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.623 [2024-09-30 20:05:58.967260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.623 [2024-09-30 20:05:58.967280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.623 [2024-09-30 20:05:58.967288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:14.881 [2024-09-30 20:05:59.048605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.048652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.881 [2024-09-30 20:05:59.048668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.048676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.114536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.114592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.881 [2024-09-30 20:05:59.114604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.114612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.114694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.114703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:14.881 [2024-09-30 20:05:59.114711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.114719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.114758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.114767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:14.881 [2024-09-30 20:05:59.114775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.114783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.114871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.114881] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:14.881 [2024-09-30 20:05:59.114889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.114896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.114938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.114957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:14.881 [2024-09-30 20:05:59.114965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.114973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.115013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.115075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:14.881 [2024-09-30 20:05:59.115083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.115090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.115138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:14.881 [2024-09-30 20:05:59.115153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:14.881 [2024-09-30 20:05:59.115161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:14.881 [2024-09-30 20:05:59.115169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.881 [2024-09-30 20:05:59.115307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.688 ms, result 0 00:22:15.815 00:22:15.815 00:22:15.815 20:06:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:17.715 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:22:17.715 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:22:17.715 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:22:17.715 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:17.715 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 75966 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 75966 ']' 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 75966 00:22:17.973 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75966) - No such process 00:22:17.973 Process with pid 75966 is not found 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 75966 is not found' 00:22:17.973 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:22:18.232 Remove shared memory files 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 
00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:22:18.232 00:22:18.232 real 2m20.592s 00:22:18.232 user 2m38.205s 00:22:18.232 sys 0m22.782s 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.232 20:06:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:18.232 ************************************ 00:22:18.232 END TEST ftl_dirty_shutdown 00:22:18.232 ************************************ 00:22:18.232 20:06:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:18.232 20:06:02 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:18.232 20:06:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.232 20:06:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:18.232 ************************************ 00:22:18.232 START TEST ftl_upgrade_shutdown 00:22:18.232 ************************************ 00:22:18.232 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:18.490 * Looking for test storage... 
00:22:18.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:18.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.490 --rc genhtml_branch_coverage=1 00:22:18.490 --rc genhtml_function_coverage=1 00:22:18.490 --rc genhtml_legend=1 00:22:18.490 --rc geninfo_all_blocks=1 00:22:18.490 --rc geninfo_unexecuted_blocks=1 00:22:18.490 00:22:18.490 ' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:18.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.490 --rc genhtml_branch_coverage=1 00:22:18.490 --rc 
genhtml_function_coverage=1 00:22:18.490 --rc genhtml_legend=1 00:22:18.490 --rc geninfo_all_blocks=1 00:22:18.490 --rc geninfo_unexecuted_blocks=1 00:22:18.490 00:22:18.490 ' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:18.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.490 --rc genhtml_branch_coverage=1 00:22:18.490 --rc genhtml_function_coverage=1 00:22:18.490 --rc genhtml_legend=1 00:22:18.490 --rc geninfo_all_blocks=1 00:22:18.490 --rc geninfo_unexecuted_blocks=1 00:22:18.490 00:22:18.490 ' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:18.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.490 --rc genhtml_branch_coverage=1 00:22:18.490 --rc genhtml_function_coverage=1 00:22:18.490 --rc genhtml_legend=1 00:22:18.490 --rc geninfo_all_blocks=1 00:22:18.490 --rc geninfo_unexecuted_blocks=1 00:22:18.490 00:22:18.490 ' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:18.490 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown 
-- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:22:18.491 20:06:02 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77559 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77559 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77559 ']' 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:18.491 20:06:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:22:18.491 [2024-09-30 20:06:02.750492] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:18.491 [2024-09-30 20:06:02.750591] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77559 ] 00:22:18.749 [2024-09-30 20:06:02.894686] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.749 [2024-09-30 20:06:03.107927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:19.684 20:06:03 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:22:19.684 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:22:19.685 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:19.685 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:22:19.685 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:19.685 20:06:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:19.685 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:22:19.943 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:19.943 { 00:22:19.943 "name": "basen1", 00:22:19.943 "aliases": [ 00:22:19.943 "33248180-39ff-46d4-86a9-37dfe6e7e57c" 00:22:19.943 ], 00:22:19.943 "product_name": 
"NVMe disk", 00:22:19.943 "block_size": 4096, 00:22:19.943 "num_blocks": 1310720, 00:22:19.943 "uuid": "33248180-39ff-46d4-86a9-37dfe6e7e57c", 00:22:19.943 "numa_id": -1, 00:22:19.943 "assigned_rate_limits": { 00:22:19.943 "rw_ios_per_sec": 0, 00:22:19.943 "rw_mbytes_per_sec": 0, 00:22:19.943 "r_mbytes_per_sec": 0, 00:22:19.943 "w_mbytes_per_sec": 0 00:22:19.943 }, 00:22:19.943 "claimed": true, 00:22:19.943 "claim_type": "read_many_write_one", 00:22:19.943 "zoned": false, 00:22:19.943 "supported_io_types": { 00:22:19.943 "read": true, 00:22:19.943 "write": true, 00:22:19.943 "unmap": true, 00:22:19.943 "flush": true, 00:22:19.943 "reset": true, 00:22:19.943 "nvme_admin": true, 00:22:19.943 "nvme_io": true, 00:22:19.943 "nvme_io_md": false, 00:22:19.943 "write_zeroes": true, 00:22:19.943 "zcopy": false, 00:22:19.943 "get_zone_info": false, 00:22:19.943 "zone_management": false, 00:22:19.943 "zone_append": false, 00:22:19.943 "compare": true, 00:22:19.943 "compare_and_write": false, 00:22:19.943 "abort": true, 00:22:19.943 "seek_hole": false, 00:22:19.943 "seek_data": false, 00:22:19.943 "copy": true, 00:22:19.944 "nvme_iov_md": false 00:22:19.944 }, 00:22:19.944 "driver_specific": { 00:22:19.944 "nvme": [ 00:22:19.944 { 00:22:19.944 "pci_address": "0000:00:11.0", 00:22:19.944 "trid": { 00:22:19.944 "trtype": "PCIe", 00:22:19.944 "traddr": "0000:00:11.0" 00:22:19.944 }, 00:22:19.944 "ctrlr_data": { 00:22:19.944 "cntlid": 0, 00:22:19.944 "vendor_id": "0x1b36", 00:22:19.944 "model_number": "QEMU NVMe Ctrl", 00:22:19.944 "serial_number": "12341", 00:22:19.944 "firmware_revision": "8.0.0", 00:22:19.944 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:19.944 "oacs": { 00:22:19.944 "security": 0, 00:22:19.944 "format": 1, 00:22:19.944 "firmware": 0, 00:22:19.944 "ns_manage": 1 00:22:19.944 }, 00:22:19.944 "multi_ctrlr": false, 00:22:19.944 "ana_reporting": false 00:22:19.944 }, 00:22:19.944 "vs": { 00:22:19.944 "nvme_version": "1.4" 00:22:19.944 }, 00:22:19.944 "ns_data": { 
00:22:19.944 "id": 1, 00:22:19.944 "can_share": false 00:22:19.944 } 00:22:19.944 } 00:22:19.944 ], 00:22:19.944 "mp_policy": "active_passive" 00:22:19.944 } 00:22:19.944 } 00:22:19.944 ]' 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:19.944 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:20.201 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=1285ac50-f123-4ee9-b322-3024d948b416 00:22:20.201 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:20.201 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1285ac50-f123-4ee9-b322-3024d948b416 00:22:20.458 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:22:20.716 20:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=b8b0bb17-bacf-45c1-bad0-b06ab108b40c 00:22:20.716 20:06:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b8b0bb17-bacf-45c1-bad0-b06ab108b40c 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=20e71e18-745f-49ec-a296-5cf3aa75972b 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 20e71e18-745f-49ec-a296-5cf3aa75972b ]] 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 20e71e18-745f-49ec-a296-5cf3aa75972b 5120 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=20e71e18-745f-49ec-a296-5cf3aa75972b 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 20e71e18-745f-49ec-a296-5cf3aa75972b 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=20e71e18-745f-49ec-a296-5cf3aa75972b 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20e71e18-745f-49ec-a296-5cf3aa75972b 00:22:20.986 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:20.986 { 00:22:20.986 "name": "20e71e18-745f-49ec-a296-5cf3aa75972b", 00:22:20.986 "aliases": [ 00:22:20.986 "lvs/basen1p0" 00:22:20.986 ], 00:22:20.986 "product_name": "Logical Volume", 00:22:20.986 
"block_size": 4096, 00:22:20.986 "num_blocks": 5242880, 00:22:20.986 "uuid": "20e71e18-745f-49ec-a296-5cf3aa75972b", 00:22:20.986 "assigned_rate_limits": { 00:22:20.986 "rw_ios_per_sec": 0, 00:22:20.986 "rw_mbytes_per_sec": 0, 00:22:20.986 "r_mbytes_per_sec": 0, 00:22:20.986 "w_mbytes_per_sec": 0 00:22:20.986 }, 00:22:20.986 "claimed": false, 00:22:20.986 "zoned": false, 00:22:20.986 "supported_io_types": { 00:22:20.986 "read": true, 00:22:20.986 "write": true, 00:22:20.986 "unmap": true, 00:22:20.986 "flush": false, 00:22:20.986 "reset": true, 00:22:20.986 "nvme_admin": false, 00:22:20.986 "nvme_io": false, 00:22:20.986 "nvme_io_md": false, 00:22:20.986 "write_zeroes": true, 00:22:20.986 "zcopy": false, 00:22:20.986 "get_zone_info": false, 00:22:20.986 "zone_management": false, 00:22:20.986 "zone_append": false, 00:22:20.986 "compare": false, 00:22:20.986 "compare_and_write": false, 00:22:20.986 "abort": false, 00:22:20.986 "seek_hole": true, 00:22:20.986 "seek_data": true, 00:22:20.986 "copy": false, 00:22:20.986 "nvme_iov_md": false 00:22:20.986 }, 00:22:20.986 "driver_specific": { 00:22:20.986 "lvol": { 00:22:20.986 "lvol_store_uuid": "b8b0bb17-bacf-45c1-bad0-b06ab108b40c", 00:22:20.986 "base_bdev": "basen1", 00:22:20.987 "thin_provision": true, 00:22:20.987 "num_allocated_clusters": 0, 00:22:20.987 "snapshot": false, 00:22:20.987 "clone": false, 00:22:20.987 "esnap_clone": false 00:22:20.987 } 00:22:20.987 } 00:22:20.987 } 00:22:20.987 ]' 00:22:20.987 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:22:21.278 20:06:05 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:21.278 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:22:21.549 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:22:21.549 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:22:21.549 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:22:21.549 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:22:21.549 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:22:21.549 20:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 20e71e18-745f-49ec-a296-5cf3aa75972b -c cachen1p0 --l2p_dram_limit 2 00:22:21.814 [2024-09-30 20:06:05.994476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.814 [2024-09-30 20:06:05.994519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:21.814 [2024-09-30 20:06:05.994533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:21.815 [2024-09-30 20:06:05.994541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:05.994592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:05.994600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:21.815 [2024-09-30 20:06:05.994608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:22:21.815 [2024-09-30 20:06:05.994615] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:05.994635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:21.815 [2024-09-30 20:06:05.995225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:21.815 [2024-09-30 20:06:05.995247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:05.995254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:21.815 [2024-09-30 20:06:05.995262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:22:21.815 [2024-09-30 20:06:05.995283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:05.995342] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a37396da-5ab1-4c37-990c-099c4b12bcca 00:22:21.815 [2024-09-30 20:06:05.996616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:05.996642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:22:21.815 [2024-09-30 20:06:05.996650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:22:21.815 [2024-09-30 20:06:05.996661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.003476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:06.003507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:21.815 [2024-09-30 20:06:06.003515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.777 ms 00:22:21.815 [2024-09-30 20:06:06.003524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.003559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 
[2024-09-30 20:06:06.003568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:21.815 [2024-09-30 20:06:06.003575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:22:21.815 [2024-09-30 20:06:06.003590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.003647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:06.003657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:21.815 [2024-09-30 20:06:06.003664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:22:21.815 [2024-09-30 20:06:06.003671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.003689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:21.815 [2024-09-30 20:06:06.006934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:06.006973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:21.815 [2024-09-30 20:06:06.006983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.248 ms 00:22:21.815 [2024-09-30 20:06:06.006990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.007016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:06.007023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:21.815 [2024-09-30 20:06:06.007032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:21.815 [2024-09-30 20:06:06.007041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.007056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:22:21.815 [2024-09-30 20:06:06.007164] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:21.815 [2024-09-30 20:06:06.007177] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:21.815 [2024-09-30 20:06:06.007186] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:21.815 [2024-09-30 20:06:06.007198] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:21.815 [2024-09-30 20:06:06.007220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:21.815 [2024-09-30 20:06:06.007228] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:21.815 [2024-09-30 20:06:06.007234] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:21.815 [2024-09-30 20:06:06.007242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:06.007248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:21.815 [2024-09-30 20:06:06.007256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.187 ms 00:22:21.815 [2024-09-30 20:06:06.007261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.007339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.815 [2024-09-30 20:06:06.007361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:21.815 [2024-09-30 20:06:06.007369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:22:21.815 [2024-09-30 20:06:06.007375] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.815 [2024-09-30 20:06:06.007452] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:21.815 [2024-09-30 20:06:06.007463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:21.815 [2024-09-30 20:06:06.007472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:21.815 [2024-09-30 20:06:06.007491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:21.815 [2024-09-30 20:06:06.007504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:21.815 [2024-09-30 20:06:06.007510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:21.815 [2024-09-30 20:06:06.007517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:21.815 [2024-09-30 20:06:06.007530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:21.815 [2024-09-30 20:06:06.007537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:21.815 [2024-09-30 20:06:06.007549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:21.815 [2024-09-30 20:06:06.007554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:21.815 [2024-09-30 20:06:06.007569] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:21.815 [2024-09-30 20:06:06.007575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:21.815 [2024-09-30 20:06:06.007588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:21.815 [2024-09-30 20:06:06.007593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:21.815 [2024-09-30 20:06:06.007605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:21.815 [2024-09-30 20:06:06.007612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:21.815 [2024-09-30 20:06:06.007624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:21.815 [2024-09-30 20:06:06.007630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:21.815 [2024-09-30 20:06:06.007641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:21.815 [2024-09-30 20:06:06.007648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:21.815 [2024-09-30 20:06:06.007660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:21.815 [2024-09-30 20:06:06.007665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:21.815 [2024-09-30 20:06:06.007677] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:21.815 [2024-09-30 20:06:06.007695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:21.815 [2024-09-30 20:06:06.007713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:21.815 [2024-09-30 20:06:06.007721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:21.815 [2024-09-30 20:06:06.007734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:21.815 [2024-09-30 20:06:06.007742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:21.815 [2024-09-30 20:06:06.007749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:21.815 [2024-09-30 20:06:06.007755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:21.815 [2024-09-30 20:06:06.007764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:21.815 [2024-09-30 20:06:06.007770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:21.815 [2024-09-30 20:06:06.007776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:21.816 [2024-09-30 20:06:06.007781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:21.816 [2024-09-30 20:06:06.007788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:21.816 [2024-09-30 
20:06:06.007797] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:21.816 [2024-09-30 20:06:06.007807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:21.816 [2024-09-30 20:06:06.007820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:21.816 [2024-09-30 20:06:06.007839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:21.816 [2024-09-30 20:06:06.007846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:21.816 [2024-09-30 20:06:06.007852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:21.816 [2024-09-30 20:06:06.007859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 
blk_offs:0x2f20 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:21.816 [2024-09-30 20:06:06.007903] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:21.816 [2024-09-30 20:06:06.007910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:21.816 [2024-09-30 20:06:06.007924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:21.816 [2024-09-30 20:06:06.007931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:21.816 [2024-09-30 20:06:06.007940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:21.816 [2024-09-30 20:06:06.007946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:21.816 [2024-09-30 20:06:06.007954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:21.816 
[2024-09-30 20:06:06.007959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:22:21.816 [2024-09-30 20:06:06.007967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:21.816 [2024-09-30 20:06:06.008010] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:22:21.816 [2024-09-30 20:06:06.008022] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:24.347 [2024-09-30 20:06:08.285324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.347 [2024-09-30 20:06:08.285398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:24.347 [2024-09-30 20:06:08.285415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2277.302 ms 00:22:24.347 [2024-09-30 20:06:08.285425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.347 [2024-09-30 20:06:08.313852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.347 [2024-09-30 20:06:08.313908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:24.347 [2024-09-30 20:06:08.313923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.208 ms 00:22:24.347 [2024-09-30 20:06:08.313933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.347 [2024-09-30 20:06:08.314026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.347 [2024-09-30 20:06:08.314039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:24.347 [2024-09-30 20:06:08.314048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:24.347 [2024-09-30 20:06:08.314064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.347 [2024-09-30 20:06:08.355003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.347 [2024-09-30 
20:06:08.355074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:24.347 [2024-09-30 20:06:08.355096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.871 ms 00:22:24.347 [2024-09-30 20:06:08.355113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.347 [2024-09-30 20:06:08.355177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.347 [2024-09-30 20:06:08.355193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:24.348 [2024-09-30 20:06:08.355206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:24.348 [2024-09-30 20:06:08.355220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.355744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.355785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:24.348 [2024-09-30 20:06:08.355807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:22:24.348 [2024-09-30 20:06:08.355824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.355884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.355899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:24.348 [2024-09-30 20:06:08.355911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:22:24.348 [2024-09-30 20:06:08.355927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.373460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.373493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:24.348 [2024-09-30 20:06:08.373503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.508 
ms 00:22:24.348 [2024-09-30 20:06:08.373513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.385569] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:24.348 [2024-09-30 20:06:08.386606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.386633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:24.348 [2024-09-30 20:06:08.386648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.010 ms 00:22:24.348 [2024-09-30 20:06:08.386656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.409973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.410008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:22:24.348 [2024-09-30 20:06:08.410024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.285 ms 00:22:24.348 [2024-09-30 20:06:08.410032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.410118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.410129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:24.348 [2024-09-30 20:06:08.410142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:22:24.348 [2024-09-30 20:06:08.410150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.432411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.432443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:22:24.348 [2024-09-30 20:06:08.432456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.217 ms 00:22:24.348 [2024-09-30 20:06:08.432463] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.454671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.454701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:22:24.348 [2024-09-30 20:06:08.454713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.170 ms 00:22:24.348 [2024-09-30 20:06:08.454721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.455292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.455309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:24.348 [2024-09-30 20:06:08.455319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:22:24.348 [2024-09-30 20:06:08.455327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.526286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.526331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:22:24.348 [2024-09-30 20:06:08.526348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 70.924 ms 00:22:24.348 [2024-09-30 20:06:08.526358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.550637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.550673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:22:24.348 [2024-09-30 20:06:08.550687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.195 ms 00:22:24.348 [2024-09-30 20:06:08.550694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.574045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.574083] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:22:24.348 [2024-09-30 20:06:08.574095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.311 ms 00:22:24.348 [2024-09-30 20:06:08.574103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.597395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.597430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:24.348 [2024-09-30 20:06:08.597443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.253 ms 00:22:24.348 [2024-09-30 20:06:08.597451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.597491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.597501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:24.348 [2024-09-30 20:06:08.597516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:24.348 [2024-09-30 20:06:08.597524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.597603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:24.348 [2024-09-30 20:06:08.597612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:24.348 [2024-09-30 20:06:08.597623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:22:24.348 [2024-09-30 20:06:08.597630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:24.348 [2024-09-30 20:06:08.598759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2603.800 ms, result 0 00:22:24.348 { 00:22:24.348 "name": "ftl", 00:22:24.348 "uuid": "a37396da-5ab1-4c37-990c-099c4b12bcca" 00:22:24.348 } 00:22:24.348 20:06:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:22:24.606 [2024-09-30 20:06:08.813964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.606 20:06:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:22:24.864 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:22:24.864 [2024-09-30 20:06:09.210355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:24.864 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:22:25.121 [2024-09-30 20:06:09.407263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:25.121 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 
-- # (( i = 0 )) 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:25.688 Fill FTL, iteration 1 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77670 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77670 /var/tmp/spdk.tgt.sock 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77670 ']' 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 
00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.688 20:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.688 [2024-09-30 20:06:09.830679] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:25.688 [2024-09-30 20:06:09.830805] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77670 ] 00:22:25.688 [2024-09-30 20:06:09.973566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.947 [2024-09-30 20:06:10.188709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.514 20:06:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.514 20:06:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:26.514 20:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:22:26.772 ftln1 00:22:26.772 20:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:22:26.772 20:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77670 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77670 ']' 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77670 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@955 -- # uname 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77670 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:27.031 killing process with pid 77670 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77670' 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77670 00:22:27.031 20:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77670 00:22:28.933 20:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:22:28.933 20:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:28.933 [2024-09-30 20:06:12.839118] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:28.933 [2024-09-30 20:06:12.839249] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77718 ] 00:22:28.933 [2024-09-30 20:06:12.989126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.933 [2024-09-30 20:06:13.168325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.042  Copying: 262/1024 [MB] (262 MBps) Copying: 520/1024 [MB] (258 MBps) Copying: 775/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 261 MBps) 00:22:34.042 00:22:34.042 Calculate MD5 checksum, iteration 1 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:34.042 20:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:34.042 [2024-09-30 20:06:18.197005] Starting 
SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:34.042 [2024-09-30 20:06:18.197125] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77776 ] 00:22:34.042 [2024-09-30 20:06:18.345631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.299 [2024-09-30 20:06:18.524777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.814  Copying: 655/1024 [MB] (655 MBps) Copying: 1024/1024 [MB] (average 645 MBps) 00:22:36.814 00:22:36.814 20:06:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:36.814 20:06:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:38.801 Fill FTL, iteration 2 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cb916b8bd6dec5e54a2d26648e2e517b 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:38.801 20:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:38.801 [2024-09-30 20:06:23.046981] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:38.801 [2024-09-30 20:06:23.047090] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77832 ] 00:22:39.060 [2024-09-30 20:06:23.193054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.060 [2024-09-30 20:06:23.371431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.241  Copying: 261/1024 [MB] (261 MBps) Copying: 519/1024 [MB] (258 MBps) Copying: 769/1024 [MB] (250 MBps) Copying: 1023/1024 [MB] (254 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:22:44.241 00:22:44.241 Calculate MD5 checksum, iteration 2 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 
00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:44.241 20:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:44.241 [2024-09-30 20:06:28.461578] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:44.242 [2024-09-30 20:06:28.461698] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77895 ] 00:22:44.499 [2024-09-30 20:06:28.609757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.499 [2024-09-30 20:06:28.787562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.823  Copying: 671/1024 [MB] (671 MBps) Copying: 1024/1024 [MB] (average 661 MBps) 00:22:47.823 00:22:47.823 20:06:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:47.823 20:06:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:49.721 20:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:49.721 20:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6625a716351bb00f536855996807f6ad 00:22:49.721 20:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:49.721 20:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:49.721 20:06:33 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:49.721 [2024-09-30 20:06:34.034764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.721 [2024-09-30 20:06:34.034816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:49.721 [2024-09-30 20:06:34.034829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:49.721 [2024-09-30 20:06:34.034840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.721 [2024-09-30 20:06:34.034860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.721 [2024-09-30 20:06:34.034868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:49.721 [2024-09-30 20:06:34.034874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:49.721 [2024-09-30 20:06:34.034881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.722 [2024-09-30 20:06:34.034897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.722 [2024-09-30 20:06:34.034904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:49.722 [2024-09-30 20:06:34.034911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:49.722 [2024-09-30 20:06:34.034917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.722 [2024-09-30 20:06:34.034973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.201 ms, result 0 00:22:49.722 true 00:22:49.722 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:49.978 { 00:22:49.978 "name": "ftl", 00:22:49.978 "properties": [ 00:22:49.978 { 00:22:49.978 "name": "superblock_version", 00:22:49.978 "value": 5, 00:22:49.978 "read-only": 
true 00:22:49.978 }, 00:22:49.978 { 00:22:49.978 "name": "base_device", 00:22:49.978 "bands": [ 00:22:49.978 { 00:22:49.979 "id": 0, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 1, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 2, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 3, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 4, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 5, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 6, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 7, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 8, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 9, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 10, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 11, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 12, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 13, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 14, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 15, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 16, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 
00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 17, 00:22:49.979 "state": "FREE", 00:22:49.979 "validity": 0.0 00:22:49.979 } 00:22:49.979 ], 00:22:49.979 "read-only": true 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "name": "cache_device", 00:22:49.979 "type": "bdev", 00:22:49.979 "chunks": [ 00:22:49.979 { 00:22:49.979 "id": 0, 00:22:49.979 "state": "INACTIVE", 00:22:49.979 "utilization": 0.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 1, 00:22:49.979 "state": "CLOSED", 00:22:49.979 "utilization": 1.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 2, 00:22:49.979 "state": "CLOSED", 00:22:49.979 "utilization": 1.0 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 3, 00:22:49.979 "state": "OPEN", 00:22:49.979 "utilization": 0.001953125 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "id": 4, 00:22:49.979 "state": "OPEN", 00:22:49.979 "utilization": 0.0 00:22:49.979 } 00:22:49.979 ], 00:22:49.979 "read-only": true 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "name": "verbose_mode", 00:22:49.979 "value": true, 00:22:49.979 "unit": "", 00:22:49.979 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:49.979 }, 00:22:49.979 { 00:22:49.979 "name": "prep_upgrade_on_shutdown", 00:22:49.979 "value": false, 00:22:49.979 "unit": "", 00:22:49.979 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:49.979 } 00:22:49.979 ] 00:22:49.979 } 00:22:49.979 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:50.236 [2024-09-30 20:06:34.365733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.236 [2024-09-30 20:06:34.365772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:50.236 [2024-09-30 20:06:34.365784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:50.236 
[2024-09-30 20:06:34.365791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.236 [2024-09-30 20:06:34.365808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.236 [2024-09-30 20:06:34.365815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:50.236 [2024-09-30 20:06:34.365821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:50.236 [2024-09-30 20:06:34.365828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.236 [2024-09-30 20:06:34.365844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.236 [2024-09-30 20:06:34.365850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:50.236 [2024-09-30 20:06:34.365857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:50.236 [2024-09-30 20:06:34.365862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.236 [2024-09-30 20:06:34.365908] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0 00:22:50.236 true 00:22:50.236 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:50.236 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:50.236 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:50.236 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:50.236 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:50.236 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 
00:22:50.493 [2024-09-30 20:06:34.786095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.493 [2024-09-30 20:06:34.786134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:50.493 [2024-09-30 20:06:34.786146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:50.493 [2024-09-30 20:06:34.786153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.493 [2024-09-30 20:06:34.786170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.493 [2024-09-30 20:06:34.786177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:50.493 [2024-09-30 20:06:34.786184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:50.493 [2024-09-30 20:06:34.786191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.493 [2024-09-30 20:06:34.786205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.493 [2024-09-30 20:06:34.786212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:50.493 [2024-09-30 20:06:34.786218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:50.493 [2024-09-30 20:06:34.786225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.493 [2024-09-30 20:06:34.786286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.168 ms, result 0 00:22:50.493 true 00:22:50.493 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:50.751 { 00:22:50.751 "name": "ftl", 00:22:50.751 "properties": [ 00:22:50.751 { 00:22:50.751 "name": "superblock_version", 00:22:50.751 "value": 5, 00:22:50.751 "read-only": true 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "name": "base_device", 00:22:50.751 "bands": [ 00:22:50.751 { 00:22:50.751 "id": 
0, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 1, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 2, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 3, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 4, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 5, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 6, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 7, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 8, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 9, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 10, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 11, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 12, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 13, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 14, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 15, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 16, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "id": 17, 00:22:50.751 "state": "FREE", 00:22:50.751 "validity": 0.0 00:22:50.751 } 
00:22:50.751 ], 00:22:50.751 "read-only": true 00:22:50.751 }, 00:22:50.751 { 00:22:50.751 "name": "cache_device", 00:22:50.751 "type": "bdev", 00:22:50.751 "chunks": [ 00:22:50.751 { 00:22:50.751 "id": 0, 00:22:50.751 "state": "INACTIVE", 00:22:50.751 "utilization": 0.0 00:22:50.752 }, 00:22:50.752 { 00:22:50.752 "id": 1, 00:22:50.752 "state": "CLOSED", 00:22:50.752 "utilization": 1.0 00:22:50.752 }, 00:22:50.752 { 00:22:50.752 "id": 2, 00:22:50.752 "state": "CLOSED", 00:22:50.752 "utilization": 1.0 00:22:50.752 }, 00:22:50.752 { 00:22:50.752 "id": 3, 00:22:50.752 "state": "OPEN", 00:22:50.752 "utilization": 0.001953125 00:22:50.752 }, 00:22:50.752 { 00:22:50.752 "id": 4, 00:22:50.752 "state": "OPEN", 00:22:50.752 "utilization": 0.0 00:22:50.752 } 00:22:50.752 ], 00:22:50.752 "read-only": true 00:22:50.752 }, 00:22:50.752 { 00:22:50.752 "name": "verbose_mode", 00:22:50.752 "value": true, 00:22:50.752 "unit": "", 00:22:50.752 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:50.752 }, 00:22:50.752 { 00:22:50.752 "name": "prep_upgrade_on_shutdown", 00:22:50.752 "value": true, 00:22:50.752 "unit": "", 00:22:50.752 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:50.752 } 00:22:50.752 ] 00:22:50.752 } 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77559 ]] 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77559 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77559 ']' 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77559 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:22:50.752 20:06:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77559 00:22:50.752 20:06:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:50.752 20:06:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:50.752 20:06:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77559' 00:22:50.752 killing process with pid 77559 00:22:50.752 20:06:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77559 00:22:50.752 20:06:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77559 00:22:51.318 [2024-09-30 20:06:35.563617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:22:51.318 [2024-09-30 20:06:35.575608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:51.318 [2024-09-30 20:06:35.575741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:51.318 [2024-09-30 20:06:35.575817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:51.318 [2024-09-30 20:06:35.575837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:51.318 [2024-09-30 20:06:35.575870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:51.318 [2024-09-30 20:06:35.578090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:51.318 [2024-09-30 20:06:35.578187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:51.318 [2024-09-30 20:06:35.578235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.190 ms 00:22:51.318 [2024-09-30 20:06:35.578253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.048809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:23:01.307 [2024-09-30 20:06:44.048987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:01.307 [2024-09-30 20:06:44.049041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8470.474 ms 00:23:01.307 [2024-09-30 20:06:44.049061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.050059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.050149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:01.307 [2024-09-30 20:06:44.050224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.973 ms 00:23:01.307 [2024-09-30 20:06:44.050234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.051114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.051130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:01.307 [2024-09-30 20:06:44.051138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:23:01.307 [2024-09-30 20:06:44.051145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.059645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.059739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:01.307 [2024-09-30 20:06:44.059752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.472 ms 00:23:01.307 [2024-09-30 20:06:44.059758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.065826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.065852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:01.307 [2024-09-30 20:06:44.065862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 6.052 ms 00:23:01.307 [2024-09-30 20:06:44.065869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.065934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.065942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:01.307 [2024-09-30 20:06:44.065950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:23:01.307 [2024-09-30 20:06:44.065956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.074089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.074181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:01.307 [2024-09-30 20:06:44.074192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.121 ms 00:23:01.307 [2024-09-30 20:06:44.074198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.082118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.082141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:01.307 [2024-09-30 20:06:44.082148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.904 ms 00:23:01.307 [2024-09-30 20:06:44.082154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.089789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.089812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:01.307 [2024-09-30 20:06:44.089818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.621 ms 00:23:01.307 [2024-09-30 20:06:44.089824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.097467] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.097490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:01.307 [2024-09-30 20:06:44.097497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.605 ms 00:23:01.307 [2024-09-30 20:06:44.097502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.097515] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:01.307 [2024-09-30 20:06:44.097527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:01.307 [2024-09-30 20:06:44.097535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:01.307 [2024-09-30 20:06:44.097541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:01.307 [2024-09-30 20:06:44.097548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097598] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:01.307 [2024-09-30 20:06:44.097649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:01.307 [2024-09-30 20:06:44.097655] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a37396da-5ab1-4c37-990c-099c4b12bcca 00:23:01.307 [2024-09-30 20:06:44.097662] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:01.307 [2024-09-30 20:06:44.097667] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:23:01.307 [2024-09-30 20:06:44.097675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:23:01.307 [2024-09-30 20:06:44.097681] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:23:01.307 [2024-09-30 20:06:44.097689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:01.307 [2024-09-30 20:06:44.097695] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl] crit: 0 00:23:01.307 [2024-09-30 20:06:44.097700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:01.307 [2024-09-30 20:06:44.097705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:01.307 [2024-09-30 20:06:44.097712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:01.307 [2024-09-30 20:06:44.097719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.097726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:01.307 [2024-09-30 20:06:44.097732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:23:01.307 [2024-09-30 20:06:44.097738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.107748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.107772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:01.307 [2024-09-30 20:06:44.107780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.998 ms 00:23:01.307 [2024-09-30 20:06:44.107786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.108073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.307 [2024-09-30 20:06:44.108081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:01.307 [2024-09-30 20:06:44.108087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:23:01.307 [2024-09-30 20:06:44.108094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.138971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.307 [2024-09-30 20:06:44.138999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:01.307 [2024-09-30 20:06:44.139008] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.307 [2024-09-30 20:06:44.139015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.139038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.307 [2024-09-30 20:06:44.139045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:01.307 [2024-09-30 20:06:44.139052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.307 [2024-09-30 20:06:44.139059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.139122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.307 [2024-09-30 20:06:44.139135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:01.307 [2024-09-30 20:06:44.139143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.307 [2024-09-30 20:06:44.139149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.307 [2024-09-30 20:06:44.139163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.307 [2024-09-30 20:06:44.139170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:01.308 [2024-09-30 20:06:44.139177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.139183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.201795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.201834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:01.308 [2024-09-30 20:06:44.201844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.201851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253508] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:01.308 [2024-09-30 20:06:44.253559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.253565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:01.308 [2024-09-30 20:06:44.253657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.253664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:01.308 [2024-09-30 20:06:44.253732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.253738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:01.308 [2024-09-30 20:06:44.253828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.253837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:01.308 
[2024-09-30 20:06:44.253882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.253888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:01.308 [2024-09-30 20:06:44.253939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.253948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.253991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:01.308 [2024-09-30 20:06:44.253999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:01.308 [2024-09-30 20:06:44.254006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:01.308 [2024-09-30 20:06:44.254013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.308 [2024-09-30 20:06:44.254125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8678.456 ms, result 0 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78085 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:04.606 20:06:48 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78085 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78085 ']' 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:04.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:04.606 20:06:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:04.606 [2024-09-30 20:06:48.708536] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:04.606 [2024-09-30 20:06:48.708849] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78085 ] 00:23:04.606 [2024-09-30 20:06:48.855138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.864 [2024-09-30 20:06:49.024864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.429 [2024-09-30 20:06:49.653892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:05.429 [2024-09-30 20:06:49.653948] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:05.689 [2024-09-30 20:06:49.802687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.802729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:05.689 [2024-09-30 20:06:49.802741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:05.689 [2024-09-30 20:06:49.802748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.802786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.802794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:05.689 [2024-09-30 20:06:49.802800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:23:05.689 [2024-09-30 20:06:49.802806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.802826] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:05.689 [2024-09-30 20:06:49.803347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:05.689 [2024-09-30 
20:06:49.803360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.803366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:05.689 [2024-09-30 20:06:49.803373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:23:05.689 [2024-09-30 20:06:49.803382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.804621] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:05.689 [2024-09-30 20:06:49.815138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.815164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:05.689 [2024-09-30 20:06:49.815174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.519 ms 00:23:05.689 [2024-09-30 20:06:49.815181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.815227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.815235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:05.689 [2024-09-30 20:06:49.815241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:05.689 [2024-09-30 20:06:49.815247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.821411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.821566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:05.689 [2024-09-30 20:06:49.821579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.103 ms 00:23:05.689 [2024-09-30 20:06:49.821585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.821636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:23:05.689 [2024-09-30 20:06:49.821644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:05.689 [2024-09-30 20:06:49.821653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:23:05.689 [2024-09-30 20:06:49.821659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.821707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.821716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:05.689 [2024-09-30 20:06:49.821723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:05.689 [2024-09-30 20:06:49.821729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.821746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:05.689 [2024-09-30 20:06:49.824792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.824892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:05.689 [2024-09-30 20:06:49.824904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.050 ms 00:23:05.689 [2024-09-30 20:06:49.824911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.824938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.824944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:05.689 [2024-09-30 20:06:49.824955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:05.689 [2024-09-30 20:06:49.824961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.824977] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:05.689 [2024-09-30 20:06:49.824993] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:05.689 [2024-09-30 20:06:49.825021] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:05.689 [2024-09-30 20:06:49.825033] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:05.689 [2024-09-30 20:06:49.825117] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:05.689 [2024-09-30 20:06:49.825129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:05.689 [2024-09-30 20:06:49.825137] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:05.689 [2024-09-30 20:06:49.825145] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825152] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:05.689 [2024-09-30 20:06:49.825164] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:05.689 [2024-09-30 20:06:49.825169] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:05.689 [2024-09-30 20:06:49.825176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:05.689 [2024-09-30 20:06:49.825182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.825189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:05.689 [2024-09-30 20:06:49.825196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:23:05.689 
[2024-09-30 20:06:49.825203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.825281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.689 [2024-09-30 20:06:49.825289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:05.689 [2024-09-30 20:06:49.825295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:23:05.689 [2024-09-30 20:06:49.825301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.689 [2024-09-30 20:06:49.825377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:05.689 [2024-09-30 20:06:49.825386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:05.689 [2024-09-30 20:06:49.825393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:05.689 [2024-09-30 20:06:49.825413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:05.689 [2024-09-30 20:06:49.825425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:05.689 [2024-09-30 20:06:49.825431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:05.689 [2024-09-30 20:06:49.825438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:05.689 [2024-09-30 20:06:49.825451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:05.689 [2024-09-30 20:06:49.825457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 
20:06:49.825462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:05.689 [2024-09-30 20:06:49.825468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:05.689 [2024-09-30 20:06:49.825473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:05.689 [2024-09-30 20:06:49.825483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:05.689 [2024-09-30 20:06:49.825488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:05.689 [2024-09-30 20:06:49.825499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:05.689 [2024-09-30 20:06:49.825503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:05.689 [2024-09-30 20:06:49.825514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:05.689 [2024-09-30 20:06:49.825525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:05.689 [2024-09-30 20:06:49.825536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:05.689 [2024-09-30 20:06:49.825540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:05.689 [2024-09-30 20:06:49.825551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:05.689 [2024-09-30 20:06:49.825556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:05.689 [2024-09-30 
20:06:49.825561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:05.689 [2024-09-30 20:06:49.825566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:05.689 [2024-09-30 20:06:49.825571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:05.689 [2024-09-30 20:06:49.825581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:05.689 [2024-09-30 20:06:49.825586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.689 [2024-09-30 20:06:49.825592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:05.689 [2024-09-30 20:06:49.825597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:05.690 [2024-09-30 20:06:49.825602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.690 [2024-09-30 20:06:49.825613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:05.690 [2024-09-30 20:06:49.825619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:05.690 [2024-09-30 20:06:49.825626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.690 [2024-09-30 20:06:49.825631] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:05.690 [2024-09-30 20:06:49.825637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:05.690 [2024-09-30 20:06:49.825643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:05.690 [2024-09-30 20:06:49.825649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:05.690 [2024-09-30 20:06:49.825655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:05.690 [2024-09-30 20:06:49.825662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 
00:23:05.690 [2024-09-30 20:06:49.825666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:05.690 [2024-09-30 20:06:49.825672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:05.690 [2024-09-30 20:06:49.825677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:05.690 [2024-09-30 20:06:49.825682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:05.690 [2024-09-30 20:06:49.825688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:05.690 [2024-09-30 20:06:49.825695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:05.690 [2024-09-30 20:06:49.825708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:05.690 [2024-09-30 20:06:49.825726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:05.690 [2024-09-30 20:06:49.825732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:05.690 [2024-09-30 20:06:49.825737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:05.690 [2024-09-30 
20:06:49.825742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:05.690 [2024-09-30 20:06:49.825781] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:05.690 [2024-09-30 20:06:49.825787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:05.690 [2024-09-30 20:06:49.825799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:05.690 [2024-09-30 20:06:49.825804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:05.690 [2024-09-30 20:06:49.825811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:05.690 [2024-09-30 20:06:49.825817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:05.690 [2024-09-30 20:06:49.825824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:05.690 [2024-09-30 20:06:49.825832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.492 ms 00:23:05.690 [2024-09-30 20:06:49.825837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:05.690 [2024-09-30 20:06:49.825882] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:23:05.690 [2024-09-30 20:06:49.825891] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:09.895 [2024-09-30 20:06:53.556228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 20:06:53.556321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:09.895 [2024-09-30 20:06:53.556340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3730.328 ms 00:23:09.895 [2024-09-30 20:06:53.556358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.895 [2024-09-30 20:06:53.587444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 20:06:53.587506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:09.895 [2024-09-30 20:06:53.587521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.839 ms 00:23:09.895 [2024-09-30 20:06:53.587529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.895 [2024-09-30 20:06:53.587618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 
20:06:53.587630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:09.895 [2024-09-30 20:06:53.587640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:09.895 [2024-09-30 20:06:53.587649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.895 [2024-09-30 20:06:53.635173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 20:06:53.635423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:09.895 [2024-09-30 20:06:53.635448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.483 ms 00:23:09.895 [2024-09-30 20:06:53.635458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.895 [2024-09-30 20:06:53.635505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 20:06:53.635515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:09.895 [2024-09-30 20:06:53.635526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:09.895 [2024-09-30 20:06:53.635535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.895 [2024-09-30 20:06:53.636150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 20:06:53.636173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:09.895 [2024-09-30 20:06:53.636184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:23:09.895 [2024-09-30 20:06:53.636192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.895 [2024-09-30 20:06:53.636247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.895 [2024-09-30 20:06:53.636257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:09.896 [2024-09-30 20:06:53.636286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.022 ms 00:23:09.896 [2024-09-30 20:06:53.636295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.653110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.653156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:09.896 [2024-09-30 20:06:53.653168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.793 ms 00:23:09.896 [2024-09-30 20:06:53.653176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.667629] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:09.896 [2024-09-30 20:06:53.667676] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:09.896 [2024-09-30 20:06:53.667691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.667700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:23:09.896 [2024-09-30 20:06:53.667712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.357 ms 00:23:09.896 [2024-09-30 20:06:53.667720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.682165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.682211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:23:09.896 [2024-09-30 20:06:53.682224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.389 ms 00:23:09.896 [2024-09-30 20:06:53.682233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.694768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.694810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band 
info metadata 00:23:09.896 [2024-09-30 20:06:53.694821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.452 ms 00:23:09.896 [2024-09-30 20:06:53.694829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.706888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.706931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:23:09.896 [2024-09-30 20:06:53.706942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.004 ms 00:23:09.896 [2024-09-30 20:06:53.706949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.707636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.707671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:09.896 [2024-09-30 20:06:53.707681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:23:09.896 [2024-09-30 20:06:53.707689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.772637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.772699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:09.896 [2024-09-30 20:06:53.772715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.925 ms 00:23:09.896 [2024-09-30 20:06:53.772724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.784231] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:09.896 [2024-09-30 20:06:53.785304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.785386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:09.896 [2024-09-30 20:06:53.785408] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.514 ms 00:23:09.896 [2024-09-30 20:06:53.785418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.785531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.785543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:23:09.896 [2024-09-30 20:06:53.785552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:23:09.896 [2024-09-30 20:06:53.785561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.785624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.785636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:09.896 [2024-09-30 20:06:53.785645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:23:09.896 [2024-09-30 20:06:53.785657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.785681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.785691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:09.896 [2024-09-30 20:06:53.785699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:09.896 [2024-09-30 20:06:53.785708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.785747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:09.896 [2024-09-30 20:06:53.785758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.785767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:09.896 [2024-09-30 20:06:53.785775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 
00:23:09.896 [2024-09-30 20:06:53.785784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.811114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.811161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:09.896 [2024-09-30 20:06:53.811174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.305 ms 00:23:09.896 [2024-09-30 20:06:53.811182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.811295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:53.811308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:09.896 [2024-09-30 20:06:53.811318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:23:09.896 [2024-09-30 20:06:53.811330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:53.812630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4009.390 ms, result 0 00:23:09.896 [2024-09-30 20:06:53.827538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.896 [2024-09-30 20:06:53.843526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:09.896 [2024-09-30 20:06:53.851692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:09.896 20:06:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.896 20:06:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:09.896 20:06:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:09.896 20:06:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 
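The `(( i == 0 ))` / `return 0` exchange above is the tail of autotest's wait-for-listen loop: the script polls until the freshly started target answers before the test proceeds, then checks that the saved `tgt.json` config exists. A minimal sketch of that polling pattern follows; the marker path, helper name, and retry count are illustrative stand-ins, not the values autotest actually uses (the real helper polls the RPC socket via `rpc.py`):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a waitforlisten-style poll: retry until a
# readiness marker appears, returning 0 on success; give up after
# max_retries attempts and return 1.
wait_for_marker() {
    local marker=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -e $marker ]] && return 0
        sleep 0.1
    done
    return 1
}

marker=$(mktemp -u)               # a path that does not exist yet
( sleep 0.3; touch "$marker" ) &  # the "target" becomes ready shortly
wait_for_marker "$marker" 50 && echo "listening"
rm -f "$marker"
```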
00:23:09.896 20:06:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:09.896 [2024-09-30 20:06:54.091672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:54.091712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:09.896 [2024-09-30 20:06:54.091725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:09.896 [2024-09-30 20:06:54.091733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:54.091755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:54.091764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:09.896 [2024-09-30 20:06:54.091772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:09.896 [2024-09-30 20:06:54.091781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:54.091800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.896 [2024-09-30 20:06:54.091810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:09.896 [2024-09-30 20:06:54.091819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:09.896 [2024-09-30 20:06:54.091826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.896 [2024-09-30 20:06:54.091880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.201 ms, result 0 00:23:09.896 true 00:23:09.896 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:10.170 { 00:23:10.170 "name": "ftl", 00:23:10.170 "properties": [ 00:23:10.170 { 00:23:10.170 "name": "superblock_version", 
00:23:10.170 "value": 5, 00:23:10.170 "read-only": true 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "name": "base_device", 00:23:10.170 "bands": [ 00:23:10.170 { 00:23:10.170 "id": 0, 00:23:10.170 "state": "CLOSED", 00:23:10.170 "validity": 1.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 1, 00:23:10.170 "state": "CLOSED", 00:23:10.170 "validity": 1.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 2, 00:23:10.170 "state": "CLOSED", 00:23:10.170 "validity": 0.007843137254901933 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 3, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 4, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 5, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 6, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 7, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 8, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 9, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 10, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 11, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 12, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 13, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 14, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 15, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 
"id": 16, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 17, 00:23:10.170 "state": "FREE", 00:23:10.170 "validity": 0.0 00:23:10.170 } 00:23:10.170 ], 00:23:10.170 "read-only": true 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "name": "cache_device", 00:23:10.170 "type": "bdev", 00:23:10.170 "chunks": [ 00:23:10.170 { 00:23:10.170 "id": 0, 00:23:10.170 "state": "INACTIVE", 00:23:10.170 "utilization": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 1, 00:23:10.170 "state": "OPEN", 00:23:10.170 "utilization": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 2, 00:23:10.170 "state": "OPEN", 00:23:10.170 "utilization": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 3, 00:23:10.170 "state": "FREE", 00:23:10.170 "utilization": 0.0 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "id": 4, 00:23:10.170 "state": "FREE", 00:23:10.170 "utilization": 0.0 00:23:10.170 } 00:23:10.170 ], 00:23:10.170 "read-only": true 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "name": "verbose_mode", 00:23:10.170 "value": true, 00:23:10.170 "unit": "", 00:23:10.170 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:10.170 }, 00:23:10.170 { 00:23:10.170 "name": "prep_upgrade_on_shutdown", 00:23:10.170 "value": false, 00:23:10.170 "unit": "", 00:23:10.170 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:10.170 } 00:23:10.170 ] 00:23:10.170 } 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@82 -- # used=0 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:23:10.170 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:10.430 Validate MD5 checksum, iteration 1 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:10.430 20:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:10.430 
20:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:10.430 [2024-09-30 20:06:54.780396] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:10.430 [2024-09-30 20:06:54.780660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78165 ] 00:23:10.688 [2024-09-30 20:06:54.932019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.946 [2024-09-30 20:06:55.117310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.264  Copying: 733/1024 [MB] (733 MBps) Copying: 1024/1024 [MB] (average 715 MBps) 00:23:14.264 00:23:14.264 20:06:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:14.264 20:06:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:16.166 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:16.166 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cb916b8bd6dec5e54a2d26648e2e517b 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cb916b8bd6dec5e54a2d26648e2e517b != \c\b\9\1\6\b\8\b\d\6\d\e\c\5\e\5\4\a\2\d\2\6\6\4\8\e\2\e\5\1\7\b ]] 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 
checksum, iteration 2' 00:23:16.167 Validate MD5 checksum, iteration 2 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:16.167 20:07:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:16.167 [2024-09-30 20:07:00.198450] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
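The validate step above dd's 1024 MiB out of the FTL bdev into a file, hashes it with `md5sum`, strips the filename column with `cut -f1 -d' '`, and compares against the expected digest; the backslash-escaped right-hand side in the `[[ != ]]` trace is just how xtrace prints a literal (non-glob) pattern. A minimal sketch of the same compare, assuming GNU `md5sum` and an ordinary file standing in for the dd output (file contents and digest here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the validate-checksum step: hash a file, keep only the
# digest column, and fail if it does not match the expected value.
validate_md5() {
    local file=$1 expected=$2 sum
    sum=$(md5sum "$file" | cut -f1 -d' ')
    # Quoting the RHS forces a literal match, like the escaped form in the log.
    [[ $sum == "$expected" ]]
}

f=$(mktemp)
printf 'hello\n' > "$f"
validate_md5 "$f" b1946ac92492d2347c6235b4d2611184 && echo "checksum OK"
rm -f "$f"
```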
00:23:16.167 [2024-09-30 20:07:00.198556] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78229 ] 00:23:16.167 [2024-09-30 20:07:00.347314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.167 [2024-09-30 20:07:00.520232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.512  Copying: 657/1024 [MB] (657 MBps) Copying: 1024/1024 [MB] (average 670 MBps) 00:23:22.512 00:23:22.512 20:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:22.512 20:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6625a716351bb00f536855996807f6ad 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6625a716351bb00f536855996807f6ad != \6\6\2\5\a\7\1\6\3\5\1\b\b\0\0\f\5\3\6\8\5\5\9\9\6\8\0\7\f\6\a\d ]] 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78085 ]] 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78085 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # 
local base_bdev= 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78318 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:23.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78318 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78318 ']' 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.888 20:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:23.888 [2024-09-30 20:07:08.101593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
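`tcp_target_shutdown_dirty` above deliberately `kill -9`s the old target (pid 78085) so no clean-shutdown path can run, then starts a fresh `spdk_tgt` from the saved `tgt.json` and waits for its RPC socket; the shell's later "Killed" line is the expected residue of that SIGKILL. The kill-and-restart skeleton looks roughly like this (the background `sleep` is a placeholder for the target process, not SPDK):

```shell
#!/usr/bin/env bash
# Sketch of a dirty shutdown: SIGKILL the old process so it cannot run
# any cleanup, then confirm it died from the signal. In the real test,
# a new spdk_tgt would next be launched from the saved config.
sleep 300 &          # placeholder long-running target
old_pid=$!
kill -9 "$old_pid"
wait "$old_pid" 2>/dev/null
status=$?            # bash reports 128 + 9 = 137 for a SIGKILL'd child
echo "old target exited with status $status"
```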
00:23:23.888 [2024-09-30 20:07:08.101887] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78318 ] 00:23:23.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78085 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:23:23.888 [2024-09-30 20:07:08.252092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.147 [2024-09-30 20:07:08.430209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.748 [2024-09-30 20:07:09.063159] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:24.748 [2024-09-30 20:07:09.063398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:25.010 [2024-09-30 20:07:09.209038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.209193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:25.010 [2024-09-30 20:07:09.209247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:25.010 [2024-09-30 20:07:09.209275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.209334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.209355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:25.010 [2024-09-30 20:07:09.209371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:23:25.010 [2024-09-30 20:07:09.209386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.209419] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:25.010 
[2024-09-30 20:07:09.209941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:25.010 [2024-09-30 20:07:09.210029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.210073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:25.010 [2024-09-30 20:07:09.210092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:23:25.010 [2024-09-30 20:07:09.210110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.210575] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:25.010 [2024-09-30 20:07:09.223356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.223467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:25.010 [2024-09-30 20:07:09.223485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.783 ms 00:23:25.010 [2024-09-30 20:07:09.223492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.230290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.230386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:25.010 [2024-09-30 20:07:09.230400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:23:25.010 [2024-09-30 20:07:09.230406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.230672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.230689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:25.010 [2024-09-30 20:07:09.230697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:23:25.010 [2024-09-30 20:07:09.230702] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.230741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.230748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:25.010 [2024-09-30 20:07:09.230755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:23:25.010 [2024-09-30 20:07:09.230760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.230783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.230790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:25.010 [2024-09-30 20:07:09.230798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:25.010 [2024-09-30 20:07:09.230804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.230820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:25.010 [2024-09-30 20:07:09.233212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.233236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:25.010 [2024-09-30 20:07:09.233244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.396 ms 00:23:25.010 [2024-09-30 20:07:09.233249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.233294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.233302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:25.010 [2024-09-30 20:07:09.233309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:25.010 [2024-09-30 20:07:09.233314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 
[2024-09-30 20:07:09.233330] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:25.010 [2024-09-30 20:07:09.233344] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:25.010 [2024-09-30 20:07:09.233371] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:25.010 [2024-09-30 20:07:09.233383] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:25.010 [2024-09-30 20:07:09.233462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:25.010 [2024-09-30 20:07:09.233471] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:25.010 [2024-09-30 20:07:09.233479] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:25.010 [2024-09-30 20:07:09.233487] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:25.010 [2024-09-30 20:07:09.233493] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:25.010 [2024-09-30 20:07:09.233500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:25.010 [2024-09-30 20:07:09.233507] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:25.010 [2024-09-30 20:07:09.233513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:25.010 [2024-09-30 20:07:09.233518] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:25.010 [2024-09-30 20:07:09.233524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.233530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Initialize layout 00:23:25.010 [2024-09-30 20:07:09.233536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:23:25.010 [2024-09-30 20:07:09.233541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.233606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.010 [2024-09-30 20:07:09.233612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:25.010 [2024-09-30 20:07:09.233618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:23:25.010 [2024-09-30 20:07:09.233625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.010 [2024-09-30 20:07:09.233700] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:25.010 [2024-09-30 20:07:09.233707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:25.010 [2024-09-30 20:07:09.233714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:25.010 [2024-09-30 20:07:09.233720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.010 [2024-09-30 20:07:09.233726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:25.010 [2024-09-30 20:07:09.233731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:25.010 [2024-09-30 20:07:09.233736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:25.010 [2024-09-30 20:07:09.233741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:25.010 [2024-09-30 20:07:09.233748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:25.010 [2024-09-30 20:07:09.233753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.010 [2024-09-30 20:07:09.233758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:25.010 [2024-09-30 20:07:09.233763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:23:25.010 [2024-09-30 20:07:09.233769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.010 [2024-09-30 20:07:09.233774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:25.010 [2024-09-30 20:07:09.233780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:25.010 [2024-09-30 20:07:09.233785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:25.011 [2024-09-30 20:07:09.233795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:25.011 [2024-09-30 20:07:09.233800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:25.011 [2024-09-30 20:07:09.233811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:25.011 [2024-09-30 20:07:09.233816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:25.011 [2024-09-30 20:07:09.233831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:25.011 [2024-09-30 20:07:09.233836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:25.011 [2024-09-30 20:07:09.233846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:25.011 [2024-09-30 20:07:09.233851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:25.011 [2024-09-30 20:07:09.233861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 
00:23:25.011 [2024-09-30 20:07:09.233866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:25.011 [2024-09-30 20:07:09.233876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:25.011 [2024-09-30 20:07:09.233881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:25.011 [2024-09-30 20:07:09.233891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:25.011 [2024-09-30 20:07:09.233906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:25.011 [2024-09-30 20:07:09.233921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:25.011 [2024-09-30 20:07:09.233926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:25.011 [2024-09-30 20:07:09.233941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:25.011 [2024-09-30 20:07:09.233946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:25.011 [2024-09-30 20:07:09.233959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] 
Region vmap 00:23:25.011 [2024-09-30 20:07:09.233965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:25.011 [2024-09-30 20:07:09.233969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:25.011 [2024-09-30 20:07:09.233975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:25.011 [2024-09-30 20:07:09.233980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:25.011 [2024-09-30 20:07:09.233985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:25.011 [2024-09-30 20:07:09.233991] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:25.011 [2024-09-30 20:07:09.233998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:25.011 [2024-09-30 20:07:09.234010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:25.011 [2024-09-30 20:07:09.234026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:25.011 [2024-09-30 20:07:09.234031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:25.011 [2024-09-30 20:07:09.234037] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:25.011 [2024-09-30 20:07:09.234042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:25.011 [2024-09-30 20:07:09.234080] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:25.011 [2024-09-30 20:07:09.234086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:25.011 [2024-09-30 20:07:09.234098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 
00:23:25.011 [2024-09-30 20:07:09.234103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:25.011 [2024-09-30 20:07:09.234109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:25.011 [2024-09-30 20:07:09.234114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.234122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:25.011 [2024-09-30 20:07:09.234128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.466 ms 00:23:25.011 [2024-09-30 20:07:09.234133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.253430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.253534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:25.011 [2024-09-30 20:07:09.253577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.260 ms 00:23:25.011 [2024-09-30 20:07:09.253595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.253636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.253656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:25.011 [2024-09-30 20:07:09.253676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:25.011 [2024-09-30 20:07:09.253691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.292175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.292305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:25.011 [2024-09-30 20:07:09.292353] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 38.433 ms 00:23:25.011 [2024-09-30 20:07:09.292373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.292420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.292439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:25.011 [2024-09-30 20:07:09.292455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:25.011 [2024-09-30 20:07:09.292469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.292559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.292625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:25.011 [2024-09-30 20:07:09.292644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:23:25.011 [2024-09-30 20:07:09.292659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.292704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.292726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:25.011 [2024-09-30 20:07:09.292742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:23:25.011 [2024-09-30 20:07:09.292756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.303761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.303853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:25.011 [2024-09-30 20:07:09.303895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.880 ms 00:23:25.011 [2024-09-30 20:07:09.303913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.304013] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.304063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:23:25.011 [2024-09-30 20:07:09.304083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:25.011 [2024-09-30 20:07:09.304099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.316853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.316948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:23:25.011 [2024-09-30 20:07:09.316988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.686 ms 00:23:25.011 [2024-09-30 20:07:09.317001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.011 [2024-09-30 20:07:09.324308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.011 [2024-09-30 20:07:09.324390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:25.011 [2024-09-30 20:07:09.324434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:23:25.012 [2024-09-30 20:07:09.324451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.012 [2024-09-30 20:07:09.368028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.012 [2024-09-30 20:07:09.368163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:25.012 [2024-09-30 20:07:09.368208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.525 ms 00:23:25.012 [2024-09-30 20:07:09.368227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.012 [2024-09-30 20:07:09.368589] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:23:25.012 [2024-09-30 20:07:09.368717] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L 
ckpt_id=1 found seq_id=9 00:23:25.012 [2024-09-30 20:07:09.368829] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:23:25.012 [2024-09-30 20:07:09.368925] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:23:25.012 [2024-09-30 20:07:09.368955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.012 [2024-09-30 20:07:09.369016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:23:25.012 [2024-09-30 20:07:09.369039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.443 ms 00:23:25.012 [2024-09-30 20:07:09.369122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.012 [2024-09-30 20:07:09.369191] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:23:25.012 [2024-09-30 20:07:09.369266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.012 [2024-09-30 20:07:09.369292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:23:25.012 [2024-09-30 20:07:09.369308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:23:25.012 [2024-09-30 20:07:09.369367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.271 [2024-09-30 20:07:09.381122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.271 [2024-09-30 20:07:09.381220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:23:25.271 [2024-09-30 20:07:09.381261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.722 ms 00:23:25.271 [2024-09-30 20:07:09.381287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.271 [2024-09-30 20:07:09.387860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.271 [2024-09-30 20:07:09.387935] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:23:25.271 [2024-09-30 20:07:09.388025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:25.271 [2024-09-30 20:07:09.388078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.271 [2024-09-30 20:07:09.388158] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:23:25.271 [2024-09-30 20:07:09.388351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.271 [2024-09-30 20:07:09.388415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:25.271 [2024-09-30 20:07:09.388436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:23:25.271 [2024-09-30 20:07:09.388451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.844 [2024-09-30 20:07:09.990924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.844 [2024-09-30 20:07:09.991236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:25.844 [2024-09-30 20:07:09.991298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 601.800 ms 00:23:25.844 [2024-09-30 20:07:09.991315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.844 [2024-09-30 20:07:09.996303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.844 [2024-09-30 20:07:09.996360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:25.844 [2024-09-30 20:07:09.996374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.511 ms 00:23:25.844 [2024-09-30 20:07:09.996383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.844 [2024-09-30 20:07:09.997491] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:23:25.844 [2024-09-30 
20:07:09.997676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.844 [2024-09-30 20:07:09.997698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:25.844 [2024-09-30 20:07:09.997715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.255 ms 00:23:25.844 [2024-09-30 20:07:09.997729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.844 [2024-09-30 20:07:09.998073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.844 [2024-09-30 20:07:09.998123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:25.844 [2024-09-30 20:07:09.998143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:25.844 [2024-09-30 20:07:09.998157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:25.844 [2024-09-30 20:07:09.998219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 610.045 ms, result 0 00:23:25.844 [2024-09-30 20:07:09.998264] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:23:25.844 [2024-09-30 20:07:09.998404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:25.844 [2024-09-30 20:07:09.998420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:25.844 [2024-09-30 20:07:09.998448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.142 ms 00:23:25.844 [2024-09-30 20:07:09.998457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.797422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.797530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:26.788 [2024-09-30 20:07:10.797548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 797.713 ms 00:23:26.788 [2024-09-30 20:07:10.797556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.802551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.802602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:26.788 [2024-09-30 20:07:10.802615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.598 ms 00:23:26.788 [2024-09-30 20:07:10.802624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.803730] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:23:26.788 [2024-09-30 20:07:10.803796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.803806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:26.788 [2024-09-30 20:07:10.803817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.137 ms 00:23:26.788 [2024-09-30 20:07:10.803824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.803860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.803870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:26.788 [2024-09-30 20:07:10.803879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:26.788 [2024-09-30 20:07:10.803887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.803925] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 805.656 ms, result 0 00:23:26.788 [2024-09-30 20:07:10.803971] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 
2 00:23:26.788 [2024-09-30 20:07:10.803983] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:26.788 [2024-09-30 20:07:10.803993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.804003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:23:26.788 [2024-09-30 20:07:10.804016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1415.845 ms 00:23:26.788 [2024-09-30 20:07:10.804024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.804057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.804067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:23:26.788 [2024-09-30 20:07:10.804075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:26.788 [2024-09-30 20:07:10.804084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.816748] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:26.788 [2024-09-30 20:07:10.816894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.816906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:26.788 [2024-09-30 20:07:10.816917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.792 ms 00:23:26.788 [2024-09-30 20:07:10.816926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.817718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.817762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:23:26.788 [2024-09-30 20:07:10.817777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:23:26.788 
[2024-09-30 20:07:10.817789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.820141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.820358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:23:26.788 [2024-09-30 20:07:10.820386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.319 ms 00:23:26.788 [2024-09-30 20:07:10.820396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.820469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.820487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:23:26.788 [2024-09-30 20:07:10.820501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:26.788 [2024-09-30 20:07:10.820513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.820654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.820667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:26.788 [2024-09-30 20:07:10.820678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:23:26.788 [2024-09-30 20:07:10.820686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.820707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.820719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:26.788 [2024-09-30 20:07:10.820728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:26.788 [2024-09-30 20:07:10.820736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.820771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl] Self test skipped 00:23:26.788 [2024-09-30 20:07:10.820782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.820791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:26.788 [2024-09-30 20:07:10.820800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:26.788 [2024-09-30 20:07:10.820807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.820862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:26.788 [2024-09-30 20:07:10.820876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:26.788 [2024-09-30 20:07:10.820887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:23:26.788 [2024-09-30 20:07:10.820894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:26.788 [2024-09-30 20:07:10.823478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1613.129 ms, result 0 00:23:26.788 [2024-09-30 20:07:10.837543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.788 [2024-09-30 20:07:10.853530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:26.788 [2024-09-30 20:07:10.863291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- 
ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:26.788 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:26.789 Validate MD5 checksum, iteration 1 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:26.789 20:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:26.789 [2024-09-30 20:07:10.984047] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:26.789 [2024-09-30 20:07:10.984364] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78358 ] 00:23:26.789 [2024-09-30 20:07:11.139241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.049 [2024-09-30 20:07:11.329232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.602  Copying: 655/1024 [MB] (655 MBps) Copying: 1024/1024 [MB] (average 636 MBps) 00:23:30.602 00:23:30.602 20:07:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:30.602 20:07:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:33.130 Validate MD5 checksum, iteration 2 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cb916b8bd6dec5e54a2d26648e2e517b 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cb916b8bd6dec5e54a2d26648e2e517b != \c\b\9\1\6\b\8\b\d\6\d\e\c\5\e\5\4\a\2\d\2\6\6\4\8\e\2\e\5\1\7\b ]] 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 
'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:33.130 20:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:33.130 [2024-09-30 20:07:16.996818] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:33.130 [2024-09-30 20:07:16.997083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78426 ] 00:23:33.130 [2024-09-30 20:07:17.149055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.130 [2024-09-30 20:07:17.324086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.389  Copying: 530/1024 [MB] (530 MBps) Copying: 1024/1024 [MB] (average 555 MBps) 00:23:36.389 00:23:36.389 20:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:36.389 20:07:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6625a716351bb00f536855996807f6ad 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6625a716351bb00f536855996807f6ad != 
\6\6\2\5\a\7\1\6\3\5\1\b\b\0\0\f\5\3\6\8\5\5\9\9\6\8\0\7\f\6\a\d ]] 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78318 ]] 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78318 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78318 ']' 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78318 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78318 00:23:38.290 killing process with pid 78318 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:38.290 20:07:22 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78318' 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78318 00:23:38.290 20:07:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78318 00:23:38.862 [2024-09-30 20:07:23.089940] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:38.862 [2024-09-30 20:07:23.104662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.104706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:38.862 [2024-09-30 20:07:23.104720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:38.862 [2024-09-30 20:07:23.104732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.104755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:38.862 [2024-09-30 20:07:23.107679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.107912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:38.862 [2024-09-30 20:07:23.107931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.909 ms 00:23:38.862 [2024-09-30 20:07:23.107940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.108181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.108199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:38.862 [2024-09-30 20:07:23.108209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:23:38.862 [2024-09-30 20:07:23.108217] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.109698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.109803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:38.862 [2024-09-30 20:07:23.109831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.457 ms 00:23:38.862 [2024-09-30 20:07:23.109849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.112619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.112670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:38.862 [2024-09-30 20:07:23.112693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.695 ms 00:23:38.862 [2024-09-30 20:07:23.112710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.125289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.125324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:38.862 [2024-09-30 20:07:23.125336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.399 ms 00:23:38.862 [2024-09-30 20:07:23.125345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.130968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.131100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:38.862 [2024-09-30 20:07:23.131116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.585 ms 00:23:38.862 [2024-09-30 20:07:23.131125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.131194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.131203] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:38.862 [2024-09-30 20:07:23.131212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:23:38.862 [2024-09-30 20:07:23.131220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.141110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.141141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:38.862 [2024-09-30 20:07:23.141151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.875 ms 00:23:38.862 [2024-09-30 20:07:23.141158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.150992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.151022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:38.862 [2024-09-30 20:07:23.151032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.800 ms 00:23:38.862 [2024-09-30 20:07:23.151038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.160963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.161094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:38.862 [2024-09-30 20:07:23.161110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.890 ms 00:23:38.862 [2024-09-30 20:07:23.161117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.170942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.862 [2024-09-30 20:07:23.170974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:38.862 [2024-09-30 20:07:23.170984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.764 ms 
00:23:38.862 [2024-09-30 20:07:23.170991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.862 [2024-09-30 20:07:23.171027] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:38.862 [2024-09-30 20:07:23.171042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:38.862 [2024-09-30 20:07:23.171051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:38.862 [2024-09-30 20:07:23.171060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:38.862 [2024-09-30 20:07:23.171068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:38.862 [2024-09-30 20:07:23.171076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171135] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:38.863 [2024-09-30 20:07:23.171181] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:38.863 [2024-09-30 20:07:23.171188] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a37396da-5ab1-4c37-990c-099c4b12bcca 00:23:38.863 [2024-09-30 20:07:23.171196] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:38.863 [2024-09-30 20:07:23.171203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:23:38.863 [2024-09-30 20:07:23.171210] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:23:38.863 [2024-09-30 20:07:23.171222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:23:38.863 [2024-09-30 20:07:23.171229] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:38.863 [2024-09-30 20:07:23.171237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:38.863 [2024-09-30 20:07:23.171245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:38.863 [2024-09-30 20:07:23.171252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:38.863 [2024-09-30 20:07:23.171260] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:38.863 [2024-09-30 20:07:23.171286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.863 [2024-09-30 20:07:23.171295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:38.863 [2024-09-30 20:07:23.171304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:23:38.863 [2024-09-30 20:07:23.171312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.863 [2024-09-30 20:07:23.184314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.863 [2024-09-30 20:07:23.184355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:38.863 [2024-09-30 20:07:23.184367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.985 ms 00:23:38.863 [2024-09-30 20:07:23.184374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.863 [2024-09-30 20:07:23.184732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:38.863 [2024-09-30 20:07:23.184747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:38.863 [2024-09-30 20:07:23.184756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:23:38.863 [2024-09-30 20:07:23.184764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.863 [2024-09-30 20:07:23.225031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:38.863 [2024-09-30 20:07:23.225080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:38.863 [2024-09-30 20:07:23.225092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:38.863 [2024-09-30 20:07:23.225100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.863 [2024-09-30 20:07:23.225138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:38.863 [2024-09-30 20:07:23.225148] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:38.863 [2024-09-30 20:07:23.225156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:38.863 [2024-09-30 20:07:23.225165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.863 [2024-09-30 20:07:23.225298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:38.863 [2024-09-30 20:07:23.225310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:38.863 [2024-09-30 20:07:23.225326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:38.863 [2024-09-30 20:07:23.225334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:38.863 [2024-09-30 20:07:23.225352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:38.863 [2024-09-30 20:07:23.225361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:38.863 [2024-09-30 20:07:23.225370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:38.863 [2024-09-30 20:07:23.225378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.311492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.311558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:39.124 [2024-09-30 20:07:23.311572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.311580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.381497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.381552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:39.124 [2024-09-30 20:07:23.381565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:23:39.124 [2024-09-30 20:07:23.381574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.381656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.381667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:39.124 [2024-09-30 20:07:23.381676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.381692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.381756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.381772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:39.124 [2024-09-30 20:07:23.381781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.381790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.381890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.381902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:39.124 [2024-09-30 20:07:23.381910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.381922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.381966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.381976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:39.124 [2024-09-30 20:07:23.381984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.381992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.382038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.382049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:39.124 [2024-09-30 20:07:23.382057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.382065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.382122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:39.124 [2024-09-30 20:07:23.382135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:39.124 [2024-09-30 20:07:23.382144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:39.124 [2024-09-30 20:07:23.382153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:39.124 [2024-09-30 20:07:23.382326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 277.592 ms, result 0 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:40.067 Remove shared memory files 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@206 -- # rm -f rm -f 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78085 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:40.067 20:07:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:40.328 ************************************ 00:23:40.328 END TEST ftl_upgrade_shutdown 00:23:40.328 ************************************ 00:23:40.328 00:23:40.328 real 1m21.902s 00:23:40.328 user 1m53.559s 00:23:40.328 sys 0m18.407s 00:23:40.328 20:07:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:40.328 20:07:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:40.328 20:07:24 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:23:40.328 20:07:24 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:40.328 20:07:24 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:23:40.328 20:07:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:40.328 20:07:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:40.328 ************************************ 00:23:40.328 START TEST ftl_restore_fast 00:23:40.328 ************************************ 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:40.328 * Looking for test storage... 
00:23:40.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.328 --rc genhtml_branch_coverage=1 00:23:40.328 --rc genhtml_function_coverage=1 00:23:40.328 --rc genhtml_legend=1 00:23:40.328 --rc geninfo_all_blocks=1 00:23:40.328 --rc geninfo_unexecuted_blocks=1 00:23:40.328 00:23:40.328 ' 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.328 --rc genhtml_branch_coverage=1 00:23:40.328 --rc genhtml_function_coverage=1 00:23:40.328 --rc genhtml_legend=1 
00:23:40.328 --rc geninfo_all_blocks=1 00:23:40.328 --rc geninfo_unexecuted_blocks=1 00:23:40.328 00:23:40.328 ' 00:23:40.328 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:40.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.329 --rc genhtml_branch_coverage=1 00:23:40.329 --rc genhtml_function_coverage=1 00:23:40.329 --rc genhtml_legend=1 00:23:40.329 --rc geninfo_all_blocks=1 00:23:40.329 --rc geninfo_unexecuted_blocks=1 00:23:40.329 00:23:40.329 ' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:40.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.329 --rc genhtml_branch_coverage=1 00:23:40.329 --rc genhtml_function_coverage=1 00:23:40.329 --rc genhtml_legend=1 00:23:40.329 --rc geninfo_all_blocks=1 00:23:40.329 --rc geninfo_unexecuted_blocks=1 00:23:40.329 00:23:40.329 ' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:40.329 
20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.329uUFvVNM 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:40.329 20:07:24 
ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=78577 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 78577 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 78577 ']' 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.329 20:07:24 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:23:40.589 [2024-09-30 20:07:24.754375] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:40.589 [2024-09-30 20:07:24.754652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78577 ] 00:23:40.589 [2024-09-30 20:07:24.909593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.850 [2024-09-30 20:07:25.051396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.421 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.421 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:23:41.421 20:07:25 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:41.421 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:23:41.421 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:41.421 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:23:41.422 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:23:41.422 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:41.683 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 
00:23:41.683 20:07:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:41.683 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:41.683 { 00:23:41.683 "name": "nvme0n1", 00:23:41.683 "aliases": [ 00:23:41.683 "f3df3fec-261d-4ebb-bfb6-6c1b70c2c212" 00:23:41.683 ], 00:23:41.683 "product_name": "NVMe disk", 00:23:41.683 "block_size": 4096, 00:23:41.683 "num_blocks": 1310720, 00:23:41.683 "uuid": "f3df3fec-261d-4ebb-bfb6-6c1b70c2c212", 00:23:41.683 "numa_id": -1, 00:23:41.683 "assigned_rate_limits": { 00:23:41.683 "rw_ios_per_sec": 0, 00:23:41.683 "rw_mbytes_per_sec": 0, 00:23:41.683 "r_mbytes_per_sec": 0, 00:23:41.683 "w_mbytes_per_sec": 0 00:23:41.683 }, 00:23:41.683 "claimed": true, 00:23:41.683 "claim_type": "read_many_write_one", 00:23:41.683 "zoned": false, 00:23:41.683 "supported_io_types": { 00:23:41.683 "read": true, 00:23:41.683 "write": true, 00:23:41.683 "unmap": true, 00:23:41.683 "flush": true, 00:23:41.683 "reset": true, 00:23:41.683 "nvme_admin": true, 00:23:41.683 "nvme_io": true, 00:23:41.683 "nvme_io_md": false, 00:23:41.683 "write_zeroes": true, 00:23:41.683 "zcopy": false, 00:23:41.683 "get_zone_info": false, 00:23:41.683 "zone_management": false, 00:23:41.683 "zone_append": false, 00:23:41.683 "compare": true, 00:23:41.683 "compare_and_write": false, 00:23:41.683 "abort": true, 00:23:41.683 "seek_hole": false, 00:23:41.683 "seek_data": false, 00:23:41.683 "copy": true, 00:23:41.683 "nvme_iov_md": false 00:23:41.683 }, 00:23:41.683 "driver_specific": { 00:23:41.683 "nvme": [ 00:23:41.683 { 00:23:41.683 "pci_address": "0000:00:11.0", 00:23:41.683 "trid": { 00:23:41.683 "trtype": "PCIe", 00:23:41.683 "traddr": "0000:00:11.0" 00:23:41.683 }, 00:23:41.683 "ctrlr_data": { 00:23:41.683 "cntlid": 0, 00:23:41.683 "vendor_id": "0x1b36", 00:23:41.683 "model_number": "QEMU NVMe Ctrl", 00:23:41.683 "serial_number": "12341", 00:23:41.683 
"firmware_revision": "8.0.0", 00:23:41.683 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:41.683 "oacs": { 00:23:41.683 "security": 0, 00:23:41.683 "format": 1, 00:23:41.683 "firmware": 0, 00:23:41.683 "ns_manage": 1 00:23:41.683 }, 00:23:41.683 "multi_ctrlr": false, 00:23:41.683 "ana_reporting": false 00:23:41.683 }, 00:23:41.683 "vs": { 00:23:41.683 "nvme_version": "1.4" 00:23:41.683 }, 00:23:41.683 "ns_data": { 00:23:41.683 "id": 1, 00:23:41.683 "can_share": false 00:23:41.683 } 00:23:41.683 } 00:23:41.683 ], 00:23:41.683 "mp_policy": "active_passive" 00:23:41.683 } 00:23:41.683 } 00:23:41.683 ]' 00:23:41.683 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=b8b0bb17-bacf-45c1-bad0-b06ab108b40c 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:23:41.944 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
b8b0bb17-bacf-45c1-bad0-b06ab108b40c 00:23:42.205 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:42.467 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=b3f35580-98f9-4e37-82d3-0047165802dd 00:23:42.467 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b3f35580-98f9-4e37-82d3-0047165802dd 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=1869c117-7561-44fd-9cb6-a925eca1070e 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=1869c117-7561-44fd-9cb6-a925eca1070e 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=1869c117-7561-44fd-9cb6-a925eca1070e 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:42.728 20:07:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:42.990 { 00:23:42.990 "name": "1869c117-7561-44fd-9cb6-a925eca1070e", 00:23:42.990 "aliases": [ 00:23:42.990 "lvs/nvme0n1p0" 00:23:42.990 ], 00:23:42.990 "product_name": "Logical Volume", 00:23:42.990 "block_size": 4096, 00:23:42.990 "num_blocks": 26476544, 00:23:42.990 "uuid": "1869c117-7561-44fd-9cb6-a925eca1070e", 00:23:42.990 "assigned_rate_limits": { 00:23:42.990 "rw_ios_per_sec": 0, 00:23:42.990 "rw_mbytes_per_sec": 0, 00:23:42.990 "r_mbytes_per_sec": 0, 00:23:42.990 "w_mbytes_per_sec": 0 00:23:42.990 }, 00:23:42.990 "claimed": false, 00:23:42.990 "zoned": false, 00:23:42.990 "supported_io_types": { 00:23:42.990 "read": true, 00:23:42.990 "write": true, 00:23:42.990 "unmap": true, 00:23:42.990 "flush": false, 00:23:42.990 "reset": true, 00:23:42.990 "nvme_admin": false, 00:23:42.990 "nvme_io": false, 00:23:42.990 "nvme_io_md": false, 00:23:42.990 "write_zeroes": true, 00:23:42.990 "zcopy": false, 00:23:42.990 "get_zone_info": false, 00:23:42.990 "zone_management": false, 00:23:42.990 "zone_append": false, 00:23:42.990 "compare": false, 00:23:42.990 "compare_and_write": false, 00:23:42.990 "abort": false, 00:23:42.990 "seek_hole": true, 00:23:42.990 "seek_data": true, 00:23:42.990 "copy": false, 00:23:42.990 "nvme_iov_md": false 00:23:42.990 }, 00:23:42.990 "driver_specific": { 00:23:42.990 "lvol": { 00:23:42.990 "lvol_store_uuid": "b3f35580-98f9-4e37-82d3-0047165802dd", 00:23:42.990 "base_bdev": "nvme0n1", 00:23:42.990 "thin_provision": true, 00:23:42.990 "num_allocated_clusters": 0, 00:23:42.990 "snapshot": false, 00:23:42.990 "clone": false, 00:23:42.990 "esnap_clone": false 00:23:42.990 } 00:23:42.990 } 00:23:42.990 } 00:23:42.990 ]' 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- 
# jq '.[] .num_blocks' 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:23:42.990 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=1869c117-7561-44fd-9cb6-a925eca1070e 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:43.251 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:43.512 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:43.512 { 00:23:43.512 "name": "1869c117-7561-44fd-9cb6-a925eca1070e", 00:23:43.512 "aliases": [ 00:23:43.512 "lvs/nvme0n1p0" 00:23:43.512 ], 00:23:43.512 "product_name": "Logical Volume", 00:23:43.512 "block_size": 4096, 00:23:43.512 "num_blocks": 26476544, 00:23:43.513 "uuid": "1869c117-7561-44fd-9cb6-a925eca1070e", 00:23:43.513 "assigned_rate_limits": { 
00:23:43.513 "rw_ios_per_sec": 0, 00:23:43.513 "rw_mbytes_per_sec": 0, 00:23:43.513 "r_mbytes_per_sec": 0, 00:23:43.513 "w_mbytes_per_sec": 0 00:23:43.513 }, 00:23:43.513 "claimed": false, 00:23:43.513 "zoned": false, 00:23:43.513 "supported_io_types": { 00:23:43.513 "read": true, 00:23:43.513 "write": true, 00:23:43.513 "unmap": true, 00:23:43.513 "flush": false, 00:23:43.513 "reset": true, 00:23:43.513 "nvme_admin": false, 00:23:43.513 "nvme_io": false, 00:23:43.513 "nvme_io_md": false, 00:23:43.513 "write_zeroes": true, 00:23:43.513 "zcopy": false, 00:23:43.513 "get_zone_info": false, 00:23:43.513 "zone_management": false, 00:23:43.513 "zone_append": false, 00:23:43.513 "compare": false, 00:23:43.513 "compare_and_write": false, 00:23:43.513 "abort": false, 00:23:43.513 "seek_hole": true, 00:23:43.513 "seek_data": true, 00:23:43.513 "copy": false, 00:23:43.513 "nvme_iov_md": false 00:23:43.513 }, 00:23:43.513 "driver_specific": { 00:23:43.513 "lvol": { 00:23:43.513 "lvol_store_uuid": "b3f35580-98f9-4e37-82d3-0047165802dd", 00:23:43.513 "base_bdev": "nvme0n1", 00:23:43.513 "thin_provision": true, 00:23:43.513 "num_allocated_clusters": 0, 00:23:43.513 "snapshot": false, 00:23:43.513 "clone": false, 00:23:43.513 "esnap_clone": false 00:23:43.513 } 00:23:43.513 } 00:23:43.513 } 00:23:43.513 ]' 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:43.513 20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:23:43.513 
20:07:27 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=1869c117-7561-44fd-9cb6-a925eca1070e 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:43.774 20:07:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1869c117-7561-44fd-9cb6-a925eca1070e 00:23:43.774 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:43.774 { 00:23:43.774 "name": "1869c117-7561-44fd-9cb6-a925eca1070e", 00:23:43.774 "aliases": [ 00:23:43.774 "lvs/nvme0n1p0" 00:23:43.774 ], 00:23:43.774 "product_name": "Logical Volume", 00:23:43.774 "block_size": 4096, 00:23:43.774 "num_blocks": 26476544, 00:23:43.774 "uuid": "1869c117-7561-44fd-9cb6-a925eca1070e", 00:23:43.774 "assigned_rate_limits": { 00:23:43.774 "rw_ios_per_sec": 0, 00:23:43.774 "rw_mbytes_per_sec": 0, 00:23:43.774 "r_mbytes_per_sec": 0, 00:23:43.774 "w_mbytes_per_sec": 0 00:23:43.774 }, 00:23:43.774 "claimed": false, 00:23:43.774 "zoned": false, 00:23:43.774 "supported_io_types": { 00:23:43.774 "read": true, 00:23:43.774 "write": true, 00:23:43.774 "unmap": true, 00:23:43.774 "flush": false, 00:23:43.774 "reset": true, 00:23:43.774 "nvme_admin": false, 00:23:43.774 "nvme_io": false, 00:23:43.774 "nvme_io_md": false, 00:23:43.774 "write_zeroes": true, 00:23:43.774 "zcopy": false, 00:23:43.774 "get_zone_info": false, 
00:23:43.774 "zone_management": false, 00:23:43.774 "zone_append": false, 00:23:43.774 "compare": false, 00:23:43.774 "compare_and_write": false, 00:23:43.774 "abort": false, 00:23:43.774 "seek_hole": true, 00:23:43.774 "seek_data": true, 00:23:43.774 "copy": false, 00:23:43.774 "nvme_iov_md": false 00:23:43.774 }, 00:23:43.774 "driver_specific": { 00:23:43.774 "lvol": { 00:23:43.774 "lvol_store_uuid": "b3f35580-98f9-4e37-82d3-0047165802dd", 00:23:43.774 "base_bdev": "nvme0n1", 00:23:43.774 "thin_provision": true, 00:23:43.774 "num_allocated_clusters": 0, 00:23:43.774 "snapshot": false, 00:23:43.774 "clone": false, 00:23:43.774 "esnap_clone": false 00:23:43.774 } 00:23:43.774 } 00:23:43.774 } 00:23:43.774 ]' 00:23:43.774 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1869c117-7561-44fd-9cb6-a925eca1070e --l2p_dram_limit 10' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # 
ftl_construct_args+=' --fast-shutdown' 00:23:44.036 20:07:28 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1869c117-7561-44fd-9cb6-a925eca1070e --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:23:44.036 [2024-09-30 20:07:28.376252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.036 [2024-09-30 20:07:28.376301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:44.036 [2024-09-30 20:07:28.376314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:44.036 [2024-09-30 20:07:28.376320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.036 [2024-09-30 20:07:28.376362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.036 [2024-09-30 20:07:28.376370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:44.036 [2024-09-30 20:07:28.376378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:44.036 [2024-09-30 20:07:28.376384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.036 [2024-09-30 20:07:28.376404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:44.036 [2024-09-30 20:07:28.376980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:44.036 [2024-09-30 20:07:28.376998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.036 [2024-09-30 20:07:28.377005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:44.036 [2024-09-30 20:07:28.377013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:23:44.036 [2024-09-30 20:07:28.377020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.377046] mngt/ftl_mngt_md.c: 
570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1cf7d4fe-8411-4a55-85a0-d1b8ab869310 00:23:44.037 [2024-09-30 20:07:28.378017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.378047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:44.037 [2024-09-30 20:07:28.378055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:44.037 [2024-09-30 20:07:28.378062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.382929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.382958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:44.037 [2024-09-30 20:07:28.382966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:23:44.037 [2024-09-30 20:07:28.382973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.383076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.383085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:44.037 [2024-09-30 20:07:28.383092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:44.037 [2024-09-30 20:07:28.383104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.383134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.383143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:44.037 [2024-09-30 20:07:28.383149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:44.037 [2024-09-30 20:07:28.383156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.383173] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.037 [2024-09-30 20:07:28.386094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.386119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:44.037 [2024-09-30 20:07:28.386129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:23:44.037 [2024-09-30 20:07:28.386134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.386161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.386168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:44.037 [2024-09-30 20:07:28.386175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:44.037 [2024-09-30 20:07:28.386182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.386201] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:44.037 [2024-09-30 20:07:28.386317] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:44.037 [2024-09-30 20:07:28.386329] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:44.037 [2024-09-30 20:07:28.386338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:44.037 [2024-09-30 20:07:28.386349] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386356] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386363] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:44.037 [2024-09-30 
20:07:28.386369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:44.037 [2024-09-30 20:07:28.386376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:44.037 [2024-09-30 20:07:28.386381] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:44.037 [2024-09-30 20:07:28.386388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.386399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:44.037 [2024-09-30 20:07:28.386406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:23:44.037 [2024-09-30 20:07:28.386412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.386489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.037 [2024-09-30 20:07:28.386497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:44.037 [2024-09-30 20:07:28.386504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:44.037 [2024-09-30 20:07:28.386510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.037 [2024-09-30 20:07:28.386585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:44.037 [2024-09-30 20:07:28.386592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:44.037 [2024-09-30 20:07:28.386599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:44.037 [2024-09-30 20:07:28.386617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:44.037 [2024-09-30 20:07:28.386635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.037 [2024-09-30 20:07:28.386647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:44.037 [2024-09-30 20:07:28.386653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:44.037 [2024-09-30 20:07:28.386659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.037 [2024-09-30 20:07:28.386664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:44.037 [2024-09-30 20:07:28.386671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:44.037 [2024-09-30 20:07:28.386675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:44.037 [2024-09-30 20:07:28.386689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:44.037 [2024-09-30 20:07:28.386709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:44.037 [2024-09-30 20:07:28.386726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386732] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:44.037 [2024-09-30 20:07:28.386743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:44.037 [2024-09-30 20:07:28.386760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:44.037 [2024-09-30 20:07:28.386779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.037 [2024-09-30 20:07:28.386790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:44.037 [2024-09-30 20:07:28.386795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:44.037 [2024-09-30 20:07:28.386801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.037 [2024-09-30 20:07:28.386806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:44.037 [2024-09-30 20:07:28.386813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:44.037 [2024-09-30 20:07:28.386817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:44.037 [2024-09-30 20:07:28.386828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:44.037 [2024-09-30 20:07:28.386835] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:44.037 [2024-09-30 20:07:28.386847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:44.037 [2024-09-30 20:07:28.386854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.037 [2024-09-30 20:07:28.386868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:44.037 [2024-09-30 20:07:28.386875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:44.037 [2024-09-30 20:07:28.386880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:44.037 [2024-09-30 20:07:28.386888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:44.037 [2024-09-30 20:07:28.386892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:44.037 [2024-09-30 20:07:28.386899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:44.037 [2024-09-30 20:07:28.386906] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:44.037 [2024-09-30 20:07:28.386915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.037 [2024-09-30 20:07:28.386921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:44.037 [2024-09-30 20:07:28.386928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:44.037 [2024-09-30 20:07:28.386934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:44.037 [2024-09-30 20:07:28.386941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:44.037 [2024-09-30 20:07:28.386946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:44.038 [2024-09-30 20:07:28.386953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:44.038 [2024-09-30 20:07:28.386958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:44.038 [2024-09-30 20:07:28.386965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:44.038 [2024-09-30 20:07:28.386970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:44.038 [2024-09-30 20:07:28.386979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:44.038 [2024-09-30 20:07:28.386984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:44.038 [2024-09-30 20:07:28.386991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:44.038 [2024-09-30 20:07:28.386996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:44.038 [2024-09-30 20:07:28.387003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:44.038 [2024-09-30 
20:07:28.387008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:44.038 [2024-09-30 20:07:28.387016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.038 [2024-09-30 20:07:28.387022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:44.038 [2024-09-30 20:07:28.387030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:44.038 [2024-09-30 20:07:28.387035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:44.038 [2024-09-30 20:07:28.387043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:44.038 [2024-09-30 20:07:28.387048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.038 [2024-09-30 20:07:28.387055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:44.038 [2024-09-30 20:07:28.387060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:23:44.038 [2024-09-30 20:07:28.387067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.038 [2024-09-30 20:07:28.387108] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
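The `get_bdev_size` helper traced several times above reads `block_size` and `num_blocks` out of `rpc.py bdev_get_bdevs` JSON with `jq`, then echoes the bdev size in MiB. A minimal standalone sketch of that arithmetic, using values copied from the bdev dumps in this log (the real helper queries a live SPDK target; variable names `bs`, `nb`, `bdev_size` mirror the trace, and the formula is inferred from the echoed values, not taken from `autotest_common.sh` itself):

```shell
# Sketch of the size math behind get_bdev_size (assumption: bytes -> MiB).
# bs and nb below are the "block_size" / "num_blocks" fields from the
# lvol bdev JSON dumped above; the trace shows bdev_size=103424 for them.
bs=4096           # block_size from bdev_get_bdevs output
nb=26476544       # num_blocks from bdev_get_bdevs output
bdev_size=$(( nb * bs / 1024 / 1024 ))   # total bytes divided down to MiB
echo "$bdev_size"
```

The same formula reproduces the earlier base-device result in this log (`nb=1310720`, `bs=4096` → `5120` MiB), which is what `ftl/common.sh` then compares against the 103424 MiB request.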
00:23:44.038 [2024-09-30 20:07:28.387119] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:47.343 [2024-09-30 20:07:31.334527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.334729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:47.343 [2024-09-30 20:07:31.334782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2947.405 ms 00:23:47.343 [2024-09-30 20:07:31.334805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.355529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.355661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.343 [2024-09-30 20:07:31.355710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.546 ms 00:23:47.343 [2024-09-30 20:07:31.355730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.355840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.355863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:47.343 [2024-09-30 20:07:31.355880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:47.343 [2024-09-30 20:07:31.355899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.388393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.388522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.343 [2024-09-30 20:07:31.388575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.457 ms 00:23:47.343 [2024-09-30 20:07:31.388595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.388635] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.388653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.343 [2024-09-30 20:07:31.388670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:47.343 [2024-09-30 20:07:31.388692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.389019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.389140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.343 [2024-09-30 20:07:31.389189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:47.343 [2024-09-30 20:07:31.389210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.389341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.389397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.343 [2024-09-30 20:07:31.389439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:47.343 [2024-09-30 20:07:31.389458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.406422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.406526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.343 [2024-09-30 20:07:31.406567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.940 ms 00:23:47.343 [2024-09-30 20:07:31.406586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.415494] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:47.343 [2024-09-30 20:07:31.417836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 
[2024-09-30 20:07:31.417917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:47.343 [2024-09-30 20:07:31.417933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.180 ms 00:23:47.343 [2024-09-30 20:07:31.417939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.479814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.479848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:47.343 [2024-09-30 20:07:31.479863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.852 ms 00:23:47.343 [2024-09-30 20:07:31.479870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.479999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.480007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:47.343 [2024-09-30 20:07:31.480017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:47.343 [2024-09-30 20:07:31.480024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.498891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.498920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:47.343 [2024-09-30 20:07:31.498931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.831 ms 00:23:47.343 [2024-09-30 20:07:31.498937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.516724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.516749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:47.343 [2024-09-30 20:07:31.516759] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.767 ms 00:23:47.343 [2024-09-30 20:07:31.516766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.517195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.517204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:47.343 [2024-09-30 20:07:31.517211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:23:47.343 [2024-09-30 20:07:31.517217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.579706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.579734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:47.343 [2024-09-30 20:07:31.579747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.453 ms 00:23:47.343 [2024-09-30 20:07:31.579756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.599538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.599572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:47.343 [2024-09-30 20:07:31.599583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.724 ms 00:23:47.343 [2024-09-30 20:07:31.599589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.618351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.618496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:47.343 [2024-09-30 20:07:31.618513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:23:47.343 [2024-09-30 20:07:31.618519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 
20:07:31.637224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.637250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:47.343 [2024-09-30 20:07:31.637260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.677 ms 00:23:47.343 [2024-09-30 20:07:31.637275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.637309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.637316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:47.343 [2024-09-30 20:07:31.637328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:47.343 [2024-09-30 20:07:31.637333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.637394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.343 [2024-09-30 20:07:31.637402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:47.343 [2024-09-30 20:07:31.637426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:47.343 [2024-09-30 20:07:31.637433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.343 [2024-09-30 20:07:31.638141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3261.553 ms, result 0 00:23:47.343 { 00:23:47.343 "name": "ftl0", 00:23:47.343 "uuid": "1cf7d4fe-8411-4a55-85a0-d1b8ab869310" 00:23:47.343 } 00:23:47.343 20:07:31 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:47.343 20:07:31 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:47.604 20:07:31 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:23:47.604 20:07:31 ftl.ftl_restore_fast -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:47.866 [2024-09-30 20:07:32.037796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.037835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:47.866 [2024-09-30 20:07:32.037845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:47.866 [2024-09-30 20:07:32.037854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.037871] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:47.866 [2024-09-30 20:07:32.040006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.040032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:47.866 [2024-09-30 20:07:32.040049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:23:47.866 [2024-09-30 20:07:32.040056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.040278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.040287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:47.866 [2024-09-30 20:07:32.040296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:23:47.866 [2024-09-30 20:07:32.040302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.042756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.042773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:47.866 [2024-09-30 20:07:32.042781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:23:47.866 [2024-09-30 20:07:32.042789] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.047471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.047606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:47.866 [2024-09-30 20:07:32.047621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:23:47.866 [2024-09-30 20:07:32.047627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.066624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.066732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:47.866 [2024-09-30 20:07:32.066749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.952 ms 00:23:47.866 [2024-09-30 20:07:32.066755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.079623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.079649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:47.866 [2024-09-30 20:07:32.079661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.835 ms 00:23:47.866 [2024-09-30 20:07:32.079668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.079772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.079782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:47.866 [2024-09-30 20:07:32.079790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:47.866 [2024-09-30 20:07:32.079796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.098016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 
20:07:32.098116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:47.866 [2024-09-30 20:07:32.098132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.204 ms 00:23:47.866 [2024-09-30 20:07:32.098138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.866 [2024-09-30 20:07:32.116221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.866 [2024-09-30 20:07:32.116246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:47.867 [2024-09-30 20:07:32.116256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.056 ms 00:23:47.867 [2024-09-30 20:07:32.116262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.867 [2024-09-30 20:07:32.134037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.867 [2024-09-30 20:07:32.134060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:47.867 [2024-09-30 20:07:32.134070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.733 ms 00:23:47.867 [2024-09-30 20:07:32.134075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.867 [2024-09-30 20:07:32.151650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.867 [2024-09-30 20:07:32.151745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:47.867 [2024-09-30 20:07:32.151760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.516 ms 00:23:47.867 [2024-09-30 20:07:32.151765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.867 [2024-09-30 20:07:32.151791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:47.867 [2024-09-30 20:07:32.151801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151810] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151902] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.151996] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 
20:07:32.152086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 
[2024-09-30 20:07:32.152176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:47.867 [2024-09-30 20:07:32.152232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:23:47.868 [2024-09-30 20:07:32.152285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: 
free 00:23:47.868 [2024-09-30 20:07:32.152380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 
state: free 00:23:47.868 [2024-09-30 20:07:32.152472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:47.868 [2024-09-30 20:07:32.152484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:47.868 [2024-09-30 20:07:32.152493] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1cf7d4fe-8411-4a55-85a0-d1b8ab869310 00:23:47.868 [2024-09-30 20:07:32.152500] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:47.868 [2024-09-30 20:07:32.152509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:47.868 [2024-09-30 20:07:32.152514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:47.868 [2024-09-30 20:07:32.152521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:47.868 [2024-09-30 20:07:32.152526] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:47.868 [2024-09-30 20:07:32.152533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:47.868 [2024-09-30 20:07:32.152540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:47.868 [2024-09-30 20:07:32.152546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:47.868 [2024-09-30 20:07:32.152551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:47.868 [2024-09-30 20:07:32.152558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.868 [2024-09-30 20:07:32.152564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:47.868 [2024-09-30 20:07:32.152572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:23:47.868 [2024-09-30 20:07:32.152577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.868 [2024-09-30 20:07:32.162261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.868 [2024-09-30 
20:07:32.162294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:47.868 [2024-09-30 20:07:32.162304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.657 ms 00:23:47.868 [2024-09-30 20:07:32.162309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.868 [2024-09-30 20:07:32.162592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.868 [2024-09-30 20:07:32.162601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:47.868 [2024-09-30 20:07:32.162608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:23:47.868 [2024-09-30 20:07:32.162614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.868 [2024-09-30 20:07:32.191824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.868 [2024-09-30 20:07:32.191848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.868 [2024-09-30 20:07:32.191858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.868 [2024-09-30 20:07:32.191866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.868 [2024-09-30 20:07:32.191910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.868 [2024-09-30 20:07:32.191916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.868 [2024-09-30 20:07:32.191924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.868 [2024-09-30 20:07:32.191930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.868 [2024-09-30 20:07:32.191979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.868 [2024-09-30 20:07:32.191986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.868 [2024-09-30 20:07:32.191993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:47.868 [2024-09-30 20:07:32.191999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.868 [2024-09-30 20:07:32.192017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:47.868 [2024-09-30 20:07:32.192023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.868 [2024-09-30 20:07:32.192030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:47.868 [2024-09-30 20:07:32.192036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.250313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.250347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.130 [2024-09-30 20:07:32.250357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.250363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.130 [2024-09-30 20:07:32.298307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:48.130 [2024-09-30 20:07:32.298405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298449] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:48.130 [2024-09-30 20:07:32.298473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:48.130 [2024-09-30 20:07:32.298568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:48.130 [2024-09-30 20:07:32.298615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:48.130 [2024-09-30 20:07:32.298664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.130 [2024-09-30 20:07:32.298715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:23:48.130 [2024-09-30 20:07:32.298722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.130 [2024-09-30 20:07:32.298728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.130 [2024-09-30 20:07:32.298825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 261.003 ms, result 0 00:23:48.130 true 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 78577 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78577 ']' 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78577 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78577 00:23:48.130 killing process with pid 78577 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78577' 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 78577 00:23:48.130 20:07:32 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 78577 00:23:54.746 20:07:37 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:58.036 262144+0 records in 00:23:58.036 262144+0 records out 00:23:58.036 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.84201 s, 279 MB/s 00:23:58.036 20:07:41 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:59.414 
20:07:43 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:59.414 [2024-09-30 20:07:43.512309] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:59.414 [2024-09-30 20:07:43.512412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78804 ] 00:23:59.414 [2024-09-30 20:07:43.657975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.675 [2024-09-30 20:07:43.912541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.935 [2024-09-30 20:07:44.240776] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.935 [2024-09-30 20:07:44.240860] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:00.197 [2024-09-30 20:07:44.405508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.405569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:00.197 [2024-09-30 20:07:44.405587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:00.197 [2024-09-30 20:07:44.405602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.405664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.405675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.197 [2024-09-30 20:07:44.405685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:00.197 [2024-09-30 20:07:44.405694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.405716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:00.197 [2024-09-30 20:07:44.406439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:00.197 [2024-09-30 20:07:44.406470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.406480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.197 [2024-09-30 20:07:44.406521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:24:00.197 [2024-09-30 20:07:44.406530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.408874] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:00.197 [2024-09-30 20:07:44.423833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.424031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:00.197 [2024-09-30 20:07:44.424054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.961 ms 00:24:00.197 [2024-09-30 20:07:44.424064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.424214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.424240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:00.197 [2024-09-30 20:07:44.424251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:00.197 [2024-09-30 20:07:44.424259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.435565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.435607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize memory pools 00:24:00.197 [2024-09-30 20:07:44.435619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.193 ms 00:24:00.197 [2024-09-30 20:07:44.435627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.435715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.435725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.197 [2024-09-30 20:07:44.435735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:00.197 [2024-09-30 20:07:44.435744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.435805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.435817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:00.197 [2024-09-30 20:07:44.435826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:00.197 [2024-09-30 20:07:44.435835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.435859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:00.197 [2024-09-30 20:07:44.440423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.440460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.197 [2024-09-30 20:07:44.440471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.570 ms 00:24:00.197 [2024-09-30 20:07:44.440479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.440516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.197 [2024-09-30 20:07:44.440525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:00.197 
[2024-09-30 20:07:44.440534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:00.197 [2024-09-30 20:07:44.440543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.197 [2024-09-30 20:07:44.440585] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:00.197 [2024-09-30 20:07:44.440611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:00.197 [2024-09-30 20:07:44.440652] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:00.197 [2024-09-30 20:07:44.440670] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:00.197 [2024-09-30 20:07:44.440781] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:00.197 [2024-09-30 20:07:44.440793] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:00.197 [2024-09-30 20:07:44.440805] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:00.197 [2024-09-30 20:07:44.440820] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:00.197 [2024-09-30 20:07:44.440830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:00.197 [2024-09-30 20:07:44.440839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:00.197 [2024-09-30 20:07:44.440848] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:00.197 [2024-09-30 20:07:44.440856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:00.197 [2024-09-30 20:07:44.440865] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:24:00.198 [2024-09-30 20:07:44.440874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.440883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:00.198 [2024-09-30 20:07:44.440892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:24:00.198 [2024-09-30 20:07:44.440899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.198 [2024-09-30 20:07:44.440982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.440995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:00.198 [2024-09-30 20:07:44.441003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:00.198 [2024-09-30 20:07:44.441010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.198 [2024-09-30 20:07:44.441115] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:00.198 [2024-09-30 20:07:44.441126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:00.198 [2024-09-30 20:07:44.441135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:00.198 [2024-09-30 20:07:44.441160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:00.198 [2024-09-30 20:07:44.441184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441191] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.198 [2024-09-30 20:07:44.441199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:00.198 [2024-09-30 20:07:44.441206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:00.198 [2024-09-30 20:07:44.441215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.198 [2024-09-30 20:07:44.441229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:00.198 [2024-09-30 20:07:44.441238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:00.198 [2024-09-30 20:07:44.441245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:00.198 [2024-09-30 20:07:44.441261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:00.198 [2024-09-30 20:07:44.441312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:00.198 [2024-09-30 20:07:44.441334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:00.198 [2024-09-30 20:07:44.441357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441364] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:00.198 [2024-09-30 20:07:44.441378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:00.198 [2024-09-30 20:07:44.441400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.198 [2024-09-30 20:07:44.441416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:00.198 [2024-09-30 20:07:44.441423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:00.198 [2024-09-30 20:07:44.441430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.198 [2024-09-30 20:07:44.441437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:00.198 [2024-09-30 20:07:44.441444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:00.198 [2024-09-30 20:07:44.441452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:00.198 [2024-09-30 20:07:44.441466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:00.198 [2024-09-30 20:07:44.441474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441481] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:00.198 [2024-09-30 20:07:44.441493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:00.198 
[2024-09-30 20:07:44.441505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.198 [2024-09-30 20:07:44.441522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:00.198 [2024-09-30 20:07:44.441529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:00.198 [2024-09-30 20:07:44.441536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:00.198 [2024-09-30 20:07:44.441544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:00.198 [2024-09-30 20:07:44.441551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:00.198 [2024-09-30 20:07:44.441558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:00.198 [2024-09-30 20:07:44.441567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:00.198 [2024-09-30 20:07:44.441577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.198 [2024-09-30 20:07:44.441586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:00.198 [2024-09-30 20:07:44.441593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:00.198 [2024-09-30 20:07:44.441602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:00.198 [2024-09-30 20:07:44.441609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:00.198 [2024-09-30 20:07:44.441617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:00.198 [2024-09-30 20:07:44.441624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:00.198 [2024-09-30 20:07:44.441632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:00.198 [2024-09-30 20:07:44.441638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:00.198 [2024-09-30 20:07:44.441646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:00.198 [2024-09-30 20:07:44.441655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:00.198 [2024-09-30 20:07:44.441663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:00.198 [2024-09-30 20:07:44.441685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:00.198 [2024-09-30 20:07:44.441693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:00.198 [2024-09-30 20:07:44.441700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:00.198 [2024-09-30 20:07:44.441707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:00.198 [2024-09-30 20:07:44.441715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.198 [2024-09-30 
20:07:44.441724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:00.198 [2024-09-30 20:07:44.441732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:00.198 [2024-09-30 20:07:44.441740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:00.198 [2024-09-30 20:07:44.441747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:00.198 [2024-09-30 20:07:44.441754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.441763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:00.198 [2024-09-30 20:07:44.441772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:24:00.198 [2024-09-30 20:07:44.441781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.198 [2024-09-30 20:07:44.489495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.489553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.198 [2024-09-30 20:07:44.489568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.660 ms 00:24:00.198 [2024-09-30 20:07:44.489579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.198 [2024-09-30 20:07:44.489693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.489706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:00.198 [2024-09-30 20:07:44.489717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:00.198 [2024-09-30 20:07:44.489726] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.198 [2024-09-30 20:07:44.529106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.529152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.198 [2024-09-30 20:07:44.529168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.310 ms 00:24:00.198 [2024-09-30 20:07:44.529177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.198 [2024-09-30 20:07:44.529218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.198 [2024-09-30 20:07:44.529228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.199 [2024-09-30 20:07:44.529237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:00.199 [2024-09-30 20:07:44.529246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.199 [2024-09-30 20:07:44.529996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.199 [2024-09-30 20:07:44.530026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.199 [2024-09-30 20:07:44.530038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:24:00.199 [2024-09-30 20:07:44.530053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.199 [2024-09-30 20:07:44.530223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.199 [2024-09-30 20:07:44.530234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.199 [2024-09-30 20:07:44.530245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:24:00.199 [2024-09-30 20:07:44.530253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.199 [2024-09-30 20:07:44.546869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.199 [2024-09-30 
20:07:44.546902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.199 [2024-09-30 20:07:44.546913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.576 ms 00:24:00.199 [2024-09-30 20:07:44.546921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.562304] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:00.461 [2024-09-30 20:07:44.562345] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:00.461 [2024-09-30 20:07:44.562358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.562367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:00.461 [2024-09-30 20:07:44.562377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.327 ms 00:24:00.461 [2024-09-30 20:07:44.562385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.588751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.588790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:00.461 [2024-09-30 20:07:44.588804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.313 ms 00:24:00.461 [2024-09-30 20:07:44.588814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.601598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.601635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:00.461 [2024-09-30 20:07:44.601646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.730 ms 00:24:00.461 [2024-09-30 20:07:44.601654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:00.461 [2024-09-30 20:07:44.614318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.614356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:00.461 [2024-09-30 20:07:44.614367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.618 ms 00:24:00.461 [2024-09-30 20:07:44.614376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.615033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.615053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:00.461 [2024-09-30 20:07:44.615065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:24:00.461 [2024-09-30 20:07:44.615075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.687630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.687678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:00.461 [2024-09-30 20:07:44.687694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.536 ms 00:24:00.461 [2024-09-30 20:07:44.687703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.699818] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:00.461 [2024-09-30 20:07:44.703773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.703809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:00.461 [2024-09-30 20:07:44.703821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.016 ms 00:24:00.461 [2024-09-30 20:07:44.703830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.703915] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.703928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:00.461 [2024-09-30 20:07:44.703937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:00.461 [2024-09-30 20:07:44.703946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.704027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.704038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:00.461 [2024-09-30 20:07:44.704047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:00.461 [2024-09-30 20:07:44.704056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.704080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.704093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:00.461 [2024-09-30 20:07:44.704103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:00.461 [2024-09-30 20:07:44.704111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.704152] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:00.461 [2024-09-30 20:07:44.704164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.704173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:00.461 [2024-09-30 20:07:44.704182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:00.461 [2024-09-30 20:07:44.704195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.730358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:00.461 [2024-09-30 20:07:44.730399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:00.461 [2024-09-30 20:07:44.730415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.144 ms 00:24:00.461 [2024-09-30 20:07:44.730424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.730539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.461 [2024-09-30 20:07:44.730551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:00.461 [2024-09-30 20:07:44.730562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:00.461 [2024-09-30 20:07:44.730571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.461 [2024-09-30 20:07:44.732178] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.114 ms, result 0 00:24:49.757  Copying: 14/1024 [MB] (14 MBps) Copying: 24/1024 [MB] (10 MBps) Copying: 45/1024 [MB] (20 MBps) Copying: 60/1024 [MB] (15 MBps) Copying: 76/1024 [MB] (15 MBps) Copying: 99/1024 [MB] (22 MBps) Copying: 128/1024 [MB] (29 MBps) Copying: 149/1024 [MB] (20 MBps) Copying: 177/1024 [MB] (27 MBps) Copying: 201/1024 [MB] (23 MBps) Copying: 223/1024 [MB] (22 MBps) Copying: 239/1024 [MB] (15 MBps) Copying: 257/1024 [MB] (18 MBps) Copying: 276/1024 [MB] (19 MBps) Copying: 287/1024 [MB] (10 MBps) Copying: 313/1024 [MB] (26 MBps) Copying: 350/1024 [MB] (37 MBps) Copying: 366/1024 [MB] (16 MBps) Copying: 391/1024 [MB] (24 MBps) Copying: 409/1024 [MB] (18 MBps) Copying: 423/1024 [MB] (14 MBps) Copying: 438/1024 [MB] (15 MBps) Copying: 448/1024 [MB] (10 MBps) Copying: 461/1024 [MB] (12 MBps) Copying: 490/1024 [MB] (28 MBps) Copying: 502/1024 [MB] (12 MBps) Copying: 522/1024 [MB] (19 MBps) Copying: 545/1024 [MB] (23 MBps) Copying: 580/1024 [MB] (35 MBps) Copying: 608/1024 [MB] (27 MBps) Copying: 623/1024 [MB] 
(15 MBps) Copying: 642/1024 [MB] (19 MBps) Copying: 660/1024 [MB] (18 MBps) Copying: 673/1024 [MB] (12 MBps) Copying: 704/1024 [MB] (30 MBps) Copying: 731/1024 [MB] (27 MBps) Copying: 745/1024 [MB] (13 MBps) Copying: 759/1024 [MB] (13 MBps) Copying: 780/1024 [MB] (21 MBps) Copying: 793/1024 [MB] (13 MBps) Copying: 803/1024 [MB] (10 MBps) Copying: 818/1024 [MB] (14 MBps) Copying: 847/1024 [MB] (28 MBps) Copying: 871/1024 [MB] (24 MBps) Copying: 891/1024 [MB] (19 MBps) Copying: 938/1024 [MB] (47 MBps) Copying: 983/1024 [MB] (45 MBps) Copying: 998/1024 [MB] (14 MBps) Copying: 1016/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 20 MBps)[2024-09-30 20:08:34.048105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.757 [2024-09-30 20:08:34.048144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.757 [2024-09-30 20:08:34.048155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:49.757 [2024-09-30 20:08:34.048162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.757 [2024-09-30 20:08:34.048181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.757 [2024-09-30 20:08:34.050377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.757 [2024-09-30 20:08:34.050407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.757 [2024-09-30 20:08:34.050416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:24:49.757 [2024-09-30 20:08:34.050423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.757 [2024-09-30 20:08:34.052033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.757 [2024-09-30 20:08:34.052064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.757 [2024-09-30 20:08:34.052072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.594 ms 00:24:49.757 [2024-09-30 20:08:34.052078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.757 [2024-09-30 20:08:34.052102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.757 [2024-09-30 20:08:34.052109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:24:49.757 [2024-09-30 20:08:34.052115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:49.757 [2024-09-30 20:08:34.052121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.757 [2024-09-30 20:08:34.052161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.757 [2024-09-30 20:08:34.052168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:24:49.757 [2024-09-30 20:08:34.052174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:49.757 [2024-09-30 20:08:34.052180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.757 [2024-09-30 20:08:34.052190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:49.757 [2024-09-30 20:08:34.052200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 
wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 
wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 
261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 
/ 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:49.757 [2024-09-30 20:08:34.052520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
76: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:49.758 [2024-09-30 20:08:34.052812] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:49.758 [2024-09-30 20:08:34.052818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1cf7d4fe-8411-4a55-85a0-d1b8ab869310 00:24:49.758 [2024-09-30 20:08:34.052825] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:49.758 [2024-09-30 20:08:34.052830] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:24:49.758 [2024-09-30 
20:08:34.052836] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:49.758 [2024-09-30 20:08:34.052842] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:49.758 [2024-09-30 20:08:34.052847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.758 [2024-09-30 20:08:34.052853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.758 [2024-09-30 20:08:34.052858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.758 [2024-09-30 20:08:34.052863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.758 [2024-09-30 20:08:34.052869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.758 [2024-09-30 20:08:34.052874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.758 [2024-09-30 20:08:34.052880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.758 [2024-09-30 20:08:34.052886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:24:49.758 [2024-09-30 20:08:34.052896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.758 [2024-09-30 20:08:34.062720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.758 [2024-09-30 20:08:34.062749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.758 [2024-09-30 20:08:34.062757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.813 ms 00:24:49.758 [2024-09-30 20:08:34.062763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.758 [2024-09-30 20:08:34.063037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.758 [2024-09-30 20:08:34.063050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.758 [2024-09-30 20:08:34.063060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 
00:24:49.758 [2024-09-30 20:08:34.063066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.758 [2024-09-30 20:08:34.085237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.758 [2024-09-30 20:08:34.085272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.758 [2024-09-30 20:08:34.085280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.758 [2024-09-30 20:08:34.085286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.758 [2024-09-30 20:08:34.085327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.758 [2024-09-30 20:08:34.085333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.758 [2024-09-30 20:08:34.085342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.758 [2024-09-30 20:08:34.085347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.758 [2024-09-30 20:08:34.085382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.758 [2024-09-30 20:08:34.085389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.758 [2024-09-30 20:08:34.085394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.758 [2024-09-30 20:08:34.085402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.758 [2024-09-30 20:08:34.085413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.758 [2024-09-30 20:08:34.085420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.758 [2024-09-30 20:08:34.085425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.758 [2024-09-30 20:08:34.085433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.019 [2024-09-30 20:08:34.143934] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:24:50.019 [2024-09-30 20:08:34.143967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.019 [2024-09-30 20:08:34.143976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.019 [2024-09-30 20:08:34.143982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.019 [2024-09-30 20:08:34.192465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.019 [2024-09-30 20:08:34.192500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.019 [2024-09-30 20:08:34.192509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.019 [2024-09-30 20:08:34.192518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.019 [2024-09-30 20:08:34.192568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.019 [2024-09-30 20:08:34.192576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.019 [2024-09-30 20:08:34.192582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.019 [2024-09-30 20:08:34.192587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.019 [2024-09-30 20:08:34.192613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.019 [2024-09-30 20:08:34.192620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.019 [2024-09-30 20:08:34.192626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.019 [2024-09-30 20:08:34.192631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.019 [2024-09-30 20:08:34.192686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.019 [2024-09-30 20:08:34.192693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.019 [2024-09-30 
20:08:34.192699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.019 [2024-09-30 20:08:34.192705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.019 [2024-09-30 20:08:34.192725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.019 [2024-09-30 20:08:34.192732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.020 [2024-09-30 20:08:34.192737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.020 [2024-09-30 20:08:34.192743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.020 [2024-09-30 20:08:34.192774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.020 [2024-09-30 20:08:34.192781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.020 [2024-09-30 20:08:34.192787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.020 [2024-09-30 20:08:34.192792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.020 [2024-09-30 20:08:34.192822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.020 [2024-09-30 20:08:34.192829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.020 [2024-09-30 20:08:34.192835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.020 [2024-09-30 20:08:34.192841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.020 [2024-09-30 20:08:34.192927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 144.803 ms, result 0 00:24:50.962 00:24:50.962 00:24:50.962 20:08:35 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
--count=262144 00:24:50.962 [2024-09-30 20:08:35.265392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:50.962 [2024-09-30 20:08:35.265512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79328 ] 00:24:51.224 [2024-09-30 20:08:35.413500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.224 [2024-09-30 20:08:35.552455] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.484 [2024-09-30 20:08:35.756704] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.484 [2024-09-30 20:08:35.756750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.747 [2024-09-30 20:08:35.903499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.903542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.747 [2024-09-30 20:08:35.903555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:51.747 [2024-09-30 20:08:35.903567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.903613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.903624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.747 [2024-09-30 20:08:35.903632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:51.747 [2024-09-30 20:08:35.903639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.903656] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.747 [2024-09-30 20:08:35.904412] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.747 [2024-09-30 20:08:35.904440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.904448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.747 [2024-09-30 20:08:35.904456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:24:51.747 [2024-09-30 20:08:35.904464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.905000] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:24:51.747 [2024-09-30 20:08:35.905050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.905062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:51.747 [2024-09-30 20:08:35.905072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:51.747 [2024-09-30 20:08:35.905080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.905124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.905133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:51.747 [2024-09-30 20:08:35.905140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:51.747 [2024-09-30 20:08:35.905149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.905437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.905461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.747 [2024-09-30 20:08:35.905471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:24:51.747 [2024-09-30 20:08:35.905478] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.905539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.905552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.747 [2024-09-30 20:08:35.905562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:51.747 [2024-09-30 20:08:35.905569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.905591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.905599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.747 [2024-09-30 20:08:35.905607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:51.747 [2024-09-30 20:08:35.905613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.905631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.747 [2024-09-30 20:08:35.909165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.909192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.747 [2024-09-30 20:08:35.909202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.539 ms 00:24:51.747 [2024-09-30 20:08:35.909210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.909242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.909251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.747 [2024-09-30 20:08:35.909262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:51.747 [2024-09-30 20:08:35.909279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:51.747 [2024-09-30 20:08:35.909330] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:51.747 [2024-09-30 20:08:35.909352] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:51.747 [2024-09-30 20:08:35.909388] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:51.747 [2024-09-30 20:08:35.909404] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:51.747 [2024-09-30 20:08:35.909507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.747 [2024-09-30 20:08:35.909521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.747 [2024-09-30 20:08:35.909532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.747 [2024-09-30 20:08:35.909543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.747 [2024-09-30 20:08:35.909553] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.747 [2024-09-30 20:08:35.909561] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.747 [2024-09-30 20:08:35.909571] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.747 [2024-09-30 20:08:35.909579] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.747 [2024-09-30 20:08:35.909587] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.747 [2024-09-30 20:08:35.909596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.909603] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.747 [2024-09-30 20:08:35.909612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:24:51.747 [2024-09-30 20:08:35.909622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.909704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.747 [2024-09-30 20:08:35.909713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.747 [2024-09-30 20:08:35.909721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:51.747 [2024-09-30 20:08:35.909729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.747 [2024-09-30 20:08:35.909830] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.747 [2024-09-30 20:08:35.909847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.747 [2024-09-30 20:08:35.909856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.747 [2024-09-30 20:08:35.909865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.747 [2024-09-30 20:08:35.909876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.747 [2024-09-30 20:08:35.909884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.747 [2024-09-30 20:08:35.909892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.747 [2024-09-30 20:08:35.909900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.747 [2024-09-30 20:08:35.909908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.747 [2024-09-30 20:08:35.909915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.747 [2024-09-30 20:08:35.909923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.747 [2024-09-30 20:08:35.909931] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.747 [2024-09-30 20:08:35.909939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.747 [2024-09-30 20:08:35.909947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.747 [2024-09-30 20:08:35.909954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.747 [2024-09-30 20:08:35.909966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.747 [2024-09-30 20:08:35.909972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.747 [2024-09-30 20:08:35.909979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.747 [2024-09-30 20:08:35.909985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.748 [2024-09-30 20:08:35.909992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.748 [2024-09-30 20:08:35.909999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.748 [2024-09-30 20:08:35.910012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.748 [2024-09-30 20:08:35.910018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.748 [2024-09-30 20:08:35.910031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.748 [2024-09-30 20:08:35.910037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.748 [2024-09-30 20:08:35.910050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.748 [2024-09-30 
20:08:35.910056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.748 [2024-09-30 20:08:35.910069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.748 [2024-09-30 20:08:35.910076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.748 [2024-09-30 20:08:35.910089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.748 [2024-09-30 20:08:35.910095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.748 [2024-09-30 20:08:35.910101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.748 [2024-09-30 20:08:35.910108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.748 [2024-09-30 20:08:35.910114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.748 [2024-09-30 20:08:35.910120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.748 [2024-09-30 20:08:35.910133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.748 [2024-09-30 20:08:35.910139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910146] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.748 [2024-09-30 20:08:35.910153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.748 [2024-09-30 20:08:35.910161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.748 [2024-09-30 20:08:35.910168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.12 MiB 00:24:51.748 [2024-09-30 20:08:35.910175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.748 [2024-09-30 20:08:35.910182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.748 [2024-09-30 20:08:35.910188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.748 [2024-09-30 20:08:35.910195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.748 [2024-09-30 20:08:35.910201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.748 [2024-09-30 20:08:35.910208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.748 [2024-09-30 20:08:35.910216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.748 [2024-09-30 20:08:35.910225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.748 [2024-09-30 20:08:35.910233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.748 [2024-09-30 20:08:35.910240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.748 [2024-09-30 20:08:35.910247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.748 [2024-09-30 20:08:35.910254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.748 [2024-09-30 20:08:35.910261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.748 [2024-09-30 20:08:35.910277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.748 [2024-09-30 20:08:35.910285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.748 [2024-09-30 20:08:35.910292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.748 [2024-09-30 20:08:35.910299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.748 [2024-09-30 20:08:35.910306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.748 [2024-09-30 20:08:35.910313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.748 [2024-09-30 20:08:35.910320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.748 [2024-09-30 20:08:35.910327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.748 [2024-09-30 20:08:35.910334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.748 [2024-09-30 20:08:35.910341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.748 [2024-09-30 20:08:35.910348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.748 [2024-09-30 20:08:35.910356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.748 [2024-09-30 
20:08:35.910363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.748 [2024-09-30 20:08:35.910371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.748 [2024-09-30 20:08:35.910378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.748 [2024-09-30 20:08:35.910385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.910392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.748 [2024-09-30 20:08:35.910404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:24:51.748 [2024-09-30 20:08:35.910411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.952928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.953054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.748 [2024-09-30 20:08:35.953075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.478 ms 00:24:51.748 [2024-09-30 20:08:35.953083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.953170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.953179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.748 [2024-09-30 20:08:35.953190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:51.748 [2024-09-30 20:08:35.953197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.983349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 
20:08:35.983468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.748 [2024-09-30 20:08:35.983483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.099 ms 00:24:51.748 [2024-09-30 20:08:35.983491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.983521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.983529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.748 [2024-09-30 20:08:35.983537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.748 [2024-09-30 20:08:35.983544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.983623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.983638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.748 [2024-09-30 20:08:35.983646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:51.748 [2024-09-30 20:08:35.983653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.983762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.983770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.748 [2024-09-30 20:08:35.983778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:51.748 [2024-09-30 20:08:35.983785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.996232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.996263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.748 [2024-09-30 20:08:35.996296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.429 ms 00:24:51.748 [2024-09-30 20:08:35.996303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:35.996422] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:51.748 [2024-09-30 20:08:35.996435] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.748 [2024-09-30 20:08:35.996444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:35.996451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.748 [2024-09-30 20:08:35.996460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:51.748 [2024-09-30 20:08:35.996467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:36.008851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:36.008877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.748 [2024-09-30 20:08:36.008888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.371 ms 00:24:51.748 [2024-09-30 20:08:36.008898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.748 [2024-09-30 20:08:36.009007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.748 [2024-09-30 20:08:36.009015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.748 [2024-09-30 20:08:36.009024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:51.748 [2024-09-30 20:08:36.009031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.009072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.009081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:24:51.749 [2024-09-30 20:08:36.009089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:24:51.749 [2024-09-30 20:08:36.009096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.009669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.009688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.749 [2024-09-30 20:08:36.009696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:24:51.749 [2024-09-30 20:08:36.009703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.009718] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:24:51.749 [2024-09-30 20:08:36.009728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.009735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.749 [2024-09-30 20:08:36.009743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:51.749 [2024-09-30 20:08:36.009750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.020769] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.749 [2024-09-30 20:08:36.020891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.020903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.749 [2024-09-30 20:08:36.020912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.124 ms 00:24:51.749 [2024-09-30 20:08:36.020919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.023033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.023136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.749 [2024-09-30 20:08:36.023149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:24:51.749 [2024-09-30 20:08:36.023157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.023232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.023246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.749 [2024-09-30 20:08:36.023254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:51.749 [2024-09-30 20:08:36.023262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.023300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.023308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.749 [2024-09-30 20:08:36.023316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:51.749 [2024-09-30 20:08:36.023323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.023348] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.749 [2024-09-30 20:08:36.023357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.023364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.749 [2024-09-30 20:08:36.023374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:51.749 [2024-09-30 20:08:36.023381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.047543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.047578] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.749 [2024-09-30 20:08:36.047590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.143 ms 00:24:51.749 [2024-09-30 20:08:36.047598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.047667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.749 [2024-09-30 20:08:36.047680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.749 [2024-09-30 20:08:36.047689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:51.749 [2024-09-30 20:08:36.047696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.749 [2024-09-30 20:08:36.048557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.638 ms, result 0 00:25:52.035  Copying: 12/1024 [MB] (12 MBps) Copying: 35/1024 [MB] (23 MBps) Copying: 58/1024 [MB] (23 MBps) Copying: 82/1024 [MB] (23 MBps) Copying: 103/1024 [MB] (20 MBps) Copying: 119/1024 [MB] (15 MBps) Copying: 135/1024 [MB] (16 MBps) Copying: 155/1024 [MB] (19 MBps) Copying: 179/1024 [MB] (24 MBps) Copying: 202/1024 [MB] (23 MBps) Copying: 216/1024 [MB] (14 MBps) Copying: 237/1024 [MB] (20 MBps) Copying: 267/1024 [MB] (30 MBps) Copying: 285/1024 [MB] (18 MBps) Copying: 305/1024 [MB] (19 MBps) Copying: 319/1024 [MB] (13 MBps) Copying: 335/1024 [MB] (16 MBps) Copying: 359/1024 [MB] (24 MBps) Copying: 376/1024 [MB] (16 MBps) Copying: 398/1024 [MB] (21 MBps) Copying: 420/1024 [MB] (21 MBps) Copying: 440/1024 [MB] (20 MBps) Copying: 468/1024 [MB] (27 MBps) Copying: 482/1024 [MB] (14 MBps) Copying: 504/1024 [MB] (21 MBps) Copying: 526/1024 [MB] (21 MBps) Copying: 542/1024 [MB] (16 MBps) Copying: 556/1024 [MB] (13 MBps) Copying: 567/1024 [MB] (10 MBps) Copying: 577/1024 [MB] (10 MBps) Copying: 588/1024 [MB] (10 MBps) Copying: 599/1024 [MB] (10 
MBps) Copying: 633/1024 [MB] (33 MBps) Copying: 643/1024 [MB] (10 MBps) Copying: 658/1024 [MB] (14 MBps) Copying: 688/1024 [MB] (30 MBps) Copying: 699/1024 [MB] (11 MBps) Copying: 710/1024 [MB] (10 MBps) Copying: 720/1024 [MB] (10 MBps) Copying: 731/1024 [MB] (10 MBps) Copying: 741/1024 [MB] (10 MBps) Copying: 756/1024 [MB] (14 MBps) Copying: 787/1024 [MB] (31 MBps) Copying: 800/1024 [MB] (12 MBps) Copying: 811/1024 [MB] (10 MBps) Copying: 824/1024 [MB] (12 MBps) Copying: 836/1024 [MB] (12 MBps) Copying: 852/1024 [MB] (15 MBps) Copying: 865/1024 [MB] (12 MBps) Copying: 879/1024 [MB] (14 MBps) Copying: 890/1024 [MB] (11 MBps) Copying: 901/1024 [MB] (11 MBps) Copying: 918/1024 [MB] (16 MBps) Copying: 932/1024 [MB] (13 MBps) Copying: 950/1024 [MB] (17 MBps) Copying: 968/1024 [MB] (17 MBps) Copying: 985/1024 [MB] (17 MBps) Copying: 1000/1024 [MB] (14 MBps) Copying: 1012/1024 [MB] (12 MBps) Copying: 1024/1024 [MB] (average 17 MBps)[2024-09-30 20:09:36.168660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.035 [2024-09-30 20:09:36.168731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:52.035 [2024-09-30 20:09:36.168747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:52.035 [2024-09-30 20:09:36.168756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.035 [2024-09-30 20:09:36.168779] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:52.035 [2024-09-30 20:09:36.171804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.035 [2024-09-30 20:09:36.172071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:52.035 [2024-09-30 20:09:36.172092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:25:52.035 [2024-09-30 20:09:36.172101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.035 
[2024-09-30 20:09:36.172352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.035 [2024-09-30 20:09:36.172367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:52.035 [2024-09-30 20:09:36.172378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:25:52.035 [2024-09-30 20:09:36.172386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.035 [2024-09-30 20:09:36.172416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.035 [2024-09-30 20:09:36.172426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:52.035 [2024-09-30 20:09:36.172435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:52.035 [2024-09-30 20:09:36.172443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.035 [2024-09-30 20:09:36.172498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.035 [2024-09-30 20:09:36.172507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:52.035 [2024-09-30 20:09:36.172519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:52.035 [2024-09-30 20:09:36.172527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.035 [2024-09-30 20:09:36.172540] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:52.035 [2024-09-30 20:09:36.172552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
4: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 
0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
32: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:52.035 [2024-09-30 20:09:36.172827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.172999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173128] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173240] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:52.036 [2024-09-30 20:09:36.173361] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:52.036 [2024-09-30 20:09:36.173369] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 1cf7d4fe-8411-4a55-85a0-d1b8ab869310 00:25:52.036 [2024-09-30 20:09:36.173377] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:52.036 [2024-09-30 20:09:36.173386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:25:52.036 [2024-09-30 20:09:36.173393] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:52.036 [2024-09-30 20:09:36.173401] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:52.036 [2024-09-30 20:09:36.173409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:52.036 [2024-09-30 20:09:36.173419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:52.036 [2024-09-30 20:09:36.173427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:52.036 [2024-09-30 20:09:36.173436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:52.036 [2024-09-30 20:09:36.173443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:52.036 [2024-09-30 20:09:36.173450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.036 [2024-09-30 20:09:36.173458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:52.036 [2024-09-30 20:09:36.173465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:25:52.036 [2024-09-30 20:09:36.173473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.036 [2024-09-30 20:09:36.188645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.036 [2024-09-30 20:09:36.188690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:52.036 [2024-09-30 20:09:36.188703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.155 ms 00:25:52.036 [2024-09-30 20:09:36.188717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.036 [2024-09-30 
20:09:36.189102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.036 [2024-09-30 20:09:36.189118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:52.036 [2024-09-30 20:09:36.189128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:25:52.036 [2024-09-30 20:09:36.189136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.036 [2024-09-30 20:09:36.220473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.036 [2024-09-30 20:09:36.220521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.036 [2024-09-30 20:09:36.220534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.036 [2024-09-30 20:09:36.220547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.036 [2024-09-30 20:09:36.220621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.036 [2024-09-30 20:09:36.220631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.036 [2024-09-30 20:09:36.220640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.036 [2024-09-30 20:09:36.220649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.036 [2024-09-30 20:09:36.220708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.036 [2024-09-30 20:09:36.220720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.037 [2024-09-30 20:09:36.220729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.220738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.220760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.220769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize valid map 00:25:52.037 [2024-09-30 20:09:36.220778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.220787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.304080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.304132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.037 [2024-09-30 20:09:36.304146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.304163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.037 [2024-09-30 20:09:36.372473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.037 [2024-09-30 20:09:36.372595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.037 [2024-09-30 20:09:36.372665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372673] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.037 [2024-09-30 20:09:36.372775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.037 [2024-09-30 20:09:36.372830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.037 [2024-09-30 20:09:36.372896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.372946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.037 [2024-09-30 20:09:36.372959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.037 [2024-09-30 20:09:36.372968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.037 [2024-09-30 20:09:36.372976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.037 [2024-09-30 20:09:36.373106] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL fast shutdown', duration = 204.413 ms, result 0 00:25:52.980 00:25:52.980 00:25:52.980 20:09:37 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:55.534 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:55.534 20:09:39 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:55.534 [2024-09-30 20:09:39.390175] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:55.534 [2024-09-30 20:09:39.390329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79987 ] 00:25:55.534 [2024-09-30 20:09:39.541593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.534 [2024-09-30 20:09:39.688333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.534 [2024-09-30 20:09:39.893312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:55.534 [2024-09-30 20:09:39.893354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:55.796 [2024-09-30 20:09:40.052261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.052324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:55.796 [2024-09-30 20:09:40.052340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:55.796 [2024-09-30 20:09:40.052353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.052399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 
[2024-09-30 20:09:40.052410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:55.796 [2024-09-30 20:09:40.052419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:55.796 [2024-09-30 20:09:40.052427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.052446] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:55.796 [2024-09-30 20:09:40.053151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:55.796 [2024-09-30 20:09:40.053173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.053182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:55.796 [2024-09-30 20:09:40.053191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:25:55.796 [2024-09-30 20:09:40.053198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.053497] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:55.796 [2024-09-30 20:09:40.053522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.053531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:55.796 [2024-09-30 20:09:40.053540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:55.796 [2024-09-30 20:09:40.053548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.053593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.053602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:55.796 [2024-09-30 20:09:40.053611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 
00:25:55.796 [2024-09-30 20:09:40.053621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.053886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.053898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:55.796 [2024-09-30 20:09:40.053906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:25:55.796 [2024-09-30 20:09:40.053913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.053977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.053986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:55.796 [2024-09-30 20:09:40.053997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:55.796 [2024-09-30 20:09:40.054004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.054024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.054032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:55.796 [2024-09-30 20:09:40.054041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:55.796 [2024-09-30 20:09:40.054048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.796 [2024-09-30 20:09:40.054065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:55.796 [2024-09-30 20:09:40.058066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.796 [2024-09-30 20:09:40.058096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:55.797 [2024-09-30 20:09:40.058106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:25:55.797 [2024-09-30 20:09:40.058113] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.797 [2024-09-30 20:09:40.058146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.797 [2024-09-30 20:09:40.058154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:55.797 [2024-09-30 20:09:40.058165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:55.797 [2024-09-30 20:09:40.058172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.797 [2024-09-30 20:09:40.058219] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:55.797 [2024-09-30 20:09:40.058242] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:55.797 [2024-09-30 20:09:40.058290] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:55.797 [2024-09-30 20:09:40.058306] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:55.797 [2024-09-30 20:09:40.058411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:55.797 [2024-09-30 20:09:40.058425] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:55.797 [2024-09-30 20:09:40.058437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:55.797 [2024-09-30 20:09:40.058448] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058458] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058466] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:55.797 [2024-09-30 
20:09:40.058474] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:55.797 [2024-09-30 20:09:40.058482] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:55.797 [2024-09-30 20:09:40.058489] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:55.797 [2024-09-30 20:09:40.058497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.797 [2024-09-30 20:09:40.058504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:55.797 [2024-09-30 20:09:40.058512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:25:55.797 [2024-09-30 20:09:40.058523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.797 [2024-09-30 20:09:40.058606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.797 [2024-09-30 20:09:40.058614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:55.797 [2024-09-30 20:09:40.058622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:55.797 [2024-09-30 20:09:40.058629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.797 [2024-09-30 20:09:40.058748] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:55.797 [2024-09-30 20:09:40.058760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:55.797 [2024-09-30 20:09:40.058770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:55.797 [2024-09-30 20:09:40.058794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:55.797 [2024-09-30 20:09:40.058818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:55.797 [2024-09-30 20:09:40.058833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:55.797 [2024-09-30 20:09:40.058841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:55.797 [2024-09-30 20:09:40.058848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:55.797 [2024-09-30 20:09:40.058854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:55.797 [2024-09-30 20:09:40.058861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:55.797 [2024-09-30 20:09:40.058874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:55.797 [2024-09-30 20:09:40.058887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:55.797 [2024-09-30 20:09:40.058909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:55.797 [2024-09-30 20:09:40.058929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058936] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:55.797 [2024-09-30 20:09:40.058950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:55.797 [2024-09-30 20:09:40.058970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.797 [2024-09-30 20:09:40.058982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:55.797 [2024-09-30 20:09:40.058989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:55.797 [2024-09-30 20:09:40.058996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:55.797 [2024-09-30 20:09:40.059003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:55.797 [2024-09-30 20:09:40.059009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:55.797 [2024-09-30 20:09:40.059015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:55.797 [2024-09-30 20:09:40.059022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:55.797 [2024-09-30 20:09:40.059030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:55.797 [2024-09-30 20:09:40.059037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.059043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:55.797 [2024-09-30 20:09:40.059050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:55.797 [2024-09-30 20:09:40.059057] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.059063] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:55.797 [2024-09-30 20:09:40.059071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:55.797 [2024-09-30 20:09:40.059079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:55.797 [2024-09-30 20:09:40.059086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.797 [2024-09-30 20:09:40.059093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:55.797 [2024-09-30 20:09:40.059100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:55.797 [2024-09-30 20:09:40.059107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:55.797 [2024-09-30 20:09:40.059115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:55.797 [2024-09-30 20:09:40.059121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:55.797 [2024-09-30 20:09:40.059129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:55.797 [2024-09-30 20:09:40.059137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:55.797 [2024-09-30 20:09:40.059147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:55.797 [2024-09-30 20:09:40.059156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:55.797 [2024-09-30 20:09:40.059163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:55.797 [2024-09-30 20:09:40.059170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:55.797 [2024-09-30 20:09:40.059178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:55.797 [2024-09-30 20:09:40.059185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:55.797 [2024-09-30 20:09:40.059192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:55.797 [2024-09-30 20:09:40.059199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:55.797 [2024-09-30 20:09:40.059206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:55.797 [2024-09-30 20:09:40.059214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:55.797 [2024-09-30 20:09:40.059221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:55.797 [2024-09-30 20:09:40.059228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:55.797 [2024-09-30 20:09:40.059235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:55.797 [2024-09-30 20:09:40.059242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:55.797 [2024-09-30 20:09:40.059250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:55.797 [2024-09-30 
20:09:40.059257] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:55.797 [2024-09-30 20:09:40.059277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:55.797 [2024-09-30 20:09:40.059287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:55.798 [2024-09-30 20:09:40.059294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:55.798 [2024-09-30 20:09:40.059302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:55.798 [2024-09-30 20:09:40.059309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:55.798 [2024-09-30 20:09:40.059317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.059324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:55.798 [2024-09-30 20:09:40.059335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:25:55.798 [2024-09-30 20:09:40.059343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.097917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.098062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.798 [2024-09-30 20:09:40.098129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.533 ms 00:25:55.798 [2024-09-30 20:09:40.098153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.098254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.098298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:55.798 [2024-09-30 20:09:40.098324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:55.798 [2024-09-30 20:09:40.098343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.132366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.132495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.798 [2024-09-30 20:09:40.132549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.951 ms 00:25:55.798 [2024-09-30 20:09:40.132572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.132618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.132641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.798 [2024-09-30 20:09:40.132661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:55.798 [2024-09-30 20:09:40.132680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.132788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.132821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.798 [2024-09-30 20:09:40.132844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:55.798 [2024-09-30 20:09:40.132908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.133054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.133086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.798 [2024-09-30 20:09:40.133160] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:25:55.798 [2024-09-30 20:09:40.133184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.147457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.147574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.798 [2024-09-30 20:09:40.147624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.239 ms 00:25:55.798 [2024-09-30 20:09:40.147648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.798 [2024-09-30 20:09:40.147799] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:55.798 [2024-09-30 20:09:40.147841] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:55.798 [2024-09-30 20:09:40.147921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.798 [2024-09-30 20:09:40.147943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:55.798 [2024-09-30 20:09:40.147964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:25:55.798 [2024-09-30 20:09:40.147984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.059 [2024-09-30 20:09:40.160282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.059 [2024-09-30 20:09:40.160408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:56.059 [2024-09-30 20:09:40.160461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.267 ms 00:25:56.059 [2024-09-30 20:09:40.160490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.059 [2024-09-30 20:09:40.160628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.059 [2024-09-30 20:09:40.160653] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:56.059 [2024-09-30 20:09:40.160674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:56.059 [2024-09-30 20:09:40.160741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.059 [2024-09-30 20:09:40.160806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.059 [2024-09-30 20:09:40.160832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:56.059 [2024-09-30 20:09:40.160852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:56.059 [2024-09-30 20:09:40.160871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.059 [2024-09-30 20:09:40.161502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.059 [2024-09-30 20:09:40.161593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.059 [2024-09-30 20:09:40.161640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:25:56.059 [2024-09-30 20:09:40.161664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.161696] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:56.060 [2024-09-30 20:09:40.161729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.161748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:56.060 [2024-09-30 20:09:40.161768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:56.060 [2024-09-30 20:09:40.161786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.174413] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:56.060 [2024-09-30 
20:09:40.174645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.174680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.060 [2024-09-30 20:09:40.174819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.829 ms 00:25:56.060 [2024-09-30 20:09:40.174842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.176991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.177095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:56.060 [2024-09-30 20:09:40.177147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:25:56.060 [2024-09-30 20:09:40.177172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.177292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.177327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.060 [2024-09-30 20:09:40.177348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:56.060 [2024-09-30 20:09:40.177445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.177482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.177505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.060 [2024-09-30 20:09:40.177525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.060 [2024-09-30 20:09:40.177583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.177637] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:56.060 [2024-09-30 20:09:40.177663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:56.060 [2024-09-30 20:09:40.177726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:56.060 [2024-09-30 20:09:40.177750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:56.060 [2024-09-30 20:09:40.177804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.203881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.204021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.060 [2024-09-30 20:09:40.204041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.038 ms 00:25:56.060 [2024-09-30 20:09:40.204050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.204125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.060 [2024-09-30 20:09:40.204140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.060 [2024-09-30 20:09:40.204149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:56.060 [2024-09-30 20:09:40.204157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.060 [2024-09-30 20:09:40.205348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.584 ms, result 0 00:26:56.763  Copying: 16/1024 [MB] (16 MBps) Copying: 31/1024 [MB] (14 MBps) Copying: 57/1024 [MB] (26 MBps) Copying: 69/1024 [MB] (11 MBps) Copying: 86/1024 [MB] (16 MBps) Copying: 103/1024 [MB] (17 MBps) Copying: 120/1024 [MB] (17 MBps) Copying: 135/1024 [MB] (15 MBps) Copying: 150/1024 [MB] (14 MBps) Copying: 164/1024 [MB] (13 MBps) Copying: 179/1024 [MB] (15 MBps) Copying: 193/1024 [MB] (14 MBps) Copying: 207/1024 [MB] (13 MBps) Copying: 225/1024 [MB] (17 MBps) Copying: 237/1024 [MB] (12 MBps) Copying: 253/1024 [MB] (16 MBps) Copying: 269/1024 [MB] 
(16 MBps) Copying: 290/1024 [MB] (20 MBps) Copying: 334/1024 [MB] (44 MBps) Copying: 372/1024 [MB] (37 MBps) Copying: 385/1024 [MB] (13 MBps) Copying: 396/1024 [MB] (10 MBps) Copying: 412/1024 [MB] (15 MBps) Copying: 431/1024 [MB] (19 MBps) Copying: 447/1024 [MB] (15 MBps) Copying: 462/1024 [MB] (15 MBps) Copying: 480/1024 [MB] (17 MBps) Copying: 498/1024 [MB] (18 MBps) Copying: 510/1024 [MB] (11 MBps) Copying: 521/1024 [MB] (11 MBps) Copying: 542/1024 [MB] (20 MBps) Copying: 555/1024 [MB] (12 MBps) Copying: 565/1024 [MB] (10 MBps) Copying: 582/1024 [MB] (17 MBps) Copying: 601/1024 [MB] (18 MBps) Copying: 612/1024 [MB] (11 MBps) Copying: 627/1024 [MB] (15 MBps) Copying: 640/1024 [MB] (12 MBps) Copying: 657/1024 [MB] (16 MBps) Copying: 675/1024 [MB] (17 MBps) Copying: 692/1024 [MB] (16 MBps) Copying: 702/1024 [MB] (10 MBps) Copying: 713/1024 [MB] (10 MBps) Copying: 745/1024 [MB] (32 MBps) Copying: 755/1024 [MB] (10 MBps) Copying: 784004/1048576 [kB] (10212 kBps) Copying: 783/1024 [MB] (18 MBps) Copying: 807/1024 [MB] (24 MBps) Copying: 820/1024 [MB] (13 MBps) Copying: 846/1024 [MB] (25 MBps) Copying: 869/1024 [MB] (23 MBps) Copying: 887/1024 [MB] (18 MBps) Copying: 900/1024 [MB] (12 MBps) Copying: 912/1024 [MB] (12 MBps) Copying: 940/1024 [MB] (28 MBps) Copying: 953/1024 [MB] (13 MBps) Copying: 964/1024 [MB] (10 MBps) Copying: 980/1024 [MB] (15 MBps) Copying: 992/1024 [MB] (12 MBps) Copying: 1004/1024 [MB] (11 MBps) Copying: 1024/1024 [MB] (average 16 MBps)[2024-09-30 20:10:41.115870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.763 [2024-09-30 20:10:41.115908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:56.763 [2024-09-30 20:10:41.115921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:56.763 [2024-09-30 20:10:41.115927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.763 [2024-09-30 20:10:41.115943] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:56.763 [2024-09-30 20:10:41.118062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.763 [2024-09-30 20:10:41.118087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:56.763 [2024-09-30 20:10:41.118095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:26:56.763 [2024-09-30 20:10:41.118102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.763 [2024-09-30 20:10:41.120420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.763 [2024-09-30 20:10:41.120805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:56.763 [2024-09-30 20:10:41.120857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:26:56.763 [2024-09-30 20:10:41.120885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.763 [2024-09-30 20:10:41.120969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.763 [2024-09-30 20:10:41.120995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:56.763 [2024-09-30 20:10:41.121021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:56.763 [2024-09-30 20:10:41.121044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.763 [2024-09-30 20:10:41.121158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.763 [2024-09-30 20:10:41.121197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:56.763 [2024-09-30 20:10:41.121229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:56.763 [2024-09-30 20:10:41.121251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:56.763 [2024-09-30 20:10:41.121328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Bands validity: 00:26:56.763 [2024-09-30 20:10:41.121365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 256 / 261120 wr_cnt: 1 state: open 00:26:56.763 [2024-09-30 20:10:41.121394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:56.763 [2024-09-30 20:10:41.121694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:26:56.764 [2024-09-30 20:10:41.121718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.121995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:26:56.764 [2024-09-30 20:10:41.122067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.122993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 
/ 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
98: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:56.764 [2024-09-30 20:10:41.123926] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:56.764 [2024-09-30 20:10:41.123960] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1cf7d4fe-8411-4a55-85a0-d1b8ab869310 00:26:56.764 [2024-09-30 20:10:41.123984] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 256 00:26:56.764 [2024-09-30 20:10:41.124007] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 288 00:26:56.764 [2024-09-30 20:10:41.124028] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 256 00:26:56.765 [2024-09-30 20:10:41.124052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1250 00:26:56.765 [2024-09-30 20:10:41.124074] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:56.765 [2024-09-30 20:10:41.124104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:56.765 [2024-09-30 20:10:41.124127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:56.765 [2024-09-30 20:10:41.124147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:56.765 [2024-09-30 20:10:41.124167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:56.765 [2024-09-30 20:10:41.124189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:56.765 [2024-09-30 20:10:41.124212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:56.765 [2024-09-30 20:10:41.124236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:26:56.765 [2024-09-30 20:10:41.124259] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.138937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.027 [2024-09-30 20:10:41.138969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:57.027 [2024-09-30 20:10:41.138980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.571 ms 00:26:57.027 [2024-09-30 20:10:41.138992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.139369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.027 [2024-09-30 20:10:41.139415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:57.027 [2024-09-30 20:10:41.139426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:26:57.027 [2024-09-30 20:10:41.139433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.169578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.169614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.027 [2024-09-30 20:10:41.169626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.169635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.169694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.169703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.027 [2024-09-30 20:10:41.169713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.169722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.169796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 
20:10:41.169808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.027 [2024-09-30 20:10:41.169817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.169830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.169849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.169859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.027 [2024-09-30 20:10:41.169868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.169878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.254350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.254560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.027 [2024-09-30 20:10:41.254591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.254600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.329898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.027 [2024-09-30 20:10:41.330164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.330256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.027 [2024-09-30 20:10:41.330309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.330394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.027 [2024-09-30 20:10:41.330415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.330525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.027 [2024-09-30 20:10:41.330550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.330592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:57.027 [2024-09-30 20:10:41.330610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.330672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.027 [2024-09-30 20:10:41.330693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.330795] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.027 [2024-09-30 20:10:41.330807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.027 [2024-09-30 20:10:41.330817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.027 [2024-09-30 20:10:41.330827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.027 [2024-09-30 20:10:41.331000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.065 ms, result 0 00:26:58.415 00:26:58.415 00:26:58.415 20:10:42 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:58.415 [2024-09-30 20:10:42.649001] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:26:58.415 [2024-09-30 20:10:42.649167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80621 ] 00:26:58.676 [2024-09-30 20:10:42.805820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.938 [2024-09-30 20:10:43.081867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.200 [2024-09-30 20:10:43.413200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.200 [2024-09-30 20:10:43.413317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.463 [2024-09-30 20:10:43.578698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.579029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:59.463 [2024-09-30 20:10:43.579060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:59.463 [2024-09-30 20:10:43.579081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.463 [2024-09-30 20:10:43.579162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.579174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:59.463 [2024-09-30 20:10:43.579184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:59.463 [2024-09-30 20:10:43.579193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.463 [2024-09-30 20:10:43.579217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:59.463 [2024-09-30 20:10:43.580006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:59.463 [2024-09-30 
20:10:43.580029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.580039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:59.463 [2024-09-30 20:10:43.580050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:26:59.463 [2024-09-30 20:10:43.580059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.463 [2024-09-30 20:10:43.580410] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:59.463 [2024-09-30 20:10:43.580463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.580474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:59.463 [2024-09-30 20:10:43.580486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:59.463 [2024-09-30 20:10:43.580495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.463 [2024-09-30 20:10:43.580560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.580570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:59.463 [2024-09-30 20:10:43.580581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:59.463 [2024-09-30 20:10:43.580593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.463 [2024-09-30 20:10:43.580924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.580939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:59.463 [2024-09-30 20:10:43.580948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:26:59.463 [2024-09-30 20:10:43.580958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.463 [2024-09-30 20:10:43.581033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:59.463 [2024-09-30 20:10:43.581044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:59.464 [2024-09-30 20:10:43.581056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:59.464 [2024-09-30 20:10:43.581065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.464 [2024-09-30 20:10:43.581089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.464 [2024-09-30 20:10:43.581100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:59.464 [2024-09-30 20:10:43.581110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:59.464 [2024-09-30 20:10:43.581118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.464 [2024-09-30 20:10:43.581141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:59.464 [2024-09-30 20:10:43.586171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.464 [2024-09-30 20:10:43.586218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:59.464 [2024-09-30 20:10:43.586230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.034 ms 00:26:59.464 [2024-09-30 20:10:43.586239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.464 [2024-09-30 20:10:43.586293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.464 [2024-09-30 20:10:43.586302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:59.464 [2024-09-30 20:10:43.586316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:59.464 [2024-09-30 20:10:43.586324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.464 [2024-09-30 20:10:43.586387] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:59.464 
[2024-09-30 20:10:43.586419] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:59.464 [2024-09-30 20:10:43.586462] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:59.464 [2024-09-30 20:10:43.586480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:59.464 [2024-09-30 20:10:43.586592] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:59.464 [2024-09-30 20:10:43.586610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:59.464 [2024-09-30 20:10:43.586621] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:59.464 [2024-09-30 20:10:43.586632] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:59.464 [2024-09-30 20:10:43.586642] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:59.464 [2024-09-30 20:10:43.586651] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:59.464 [2024-09-30 20:10:43.586662] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:59.464 [2024-09-30 20:10:43.586672] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:59.464 [2024-09-30 20:10:43.586681] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:59.464 [2024-09-30 20:10:43.586691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.464 [2024-09-30 20:10:43.586700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:59.464 [2024-09-30 20:10:43.586708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.308 ms 00:26:59.464 [2024-09-30 20:10:43.586719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.464 [2024-09-30 20:10:43.586821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.464 [2024-09-30 20:10:43.586834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:59.464 [2024-09-30 20:10:43.586844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:26:59.464 [2024-09-30 20:10:43.586854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.464 [2024-09-30 20:10:43.586960] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:59.464 [2024-09-30 20:10:43.586982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:59.464 [2024-09-30 20:10:43.586992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:59.464 [2024-09-30 20:10:43.587027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:59.464 [2024-09-30 20:10:43.587062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:59.464 [2024-09-30 20:10:43.587081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:59.464 [2024-09-30 20:10:43.587093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:59.464 [2024-09-30 20:10:43.587107] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:59.464 [2024-09-30 20:10:43.587114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:59.464 [2024-09-30 20:10:43.587121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:59.464 [2024-09-30 20:10:43.587141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:59.464 [2024-09-30 20:10:43.587159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:59.464 [2024-09-30 20:10:43.587189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:59.464 [2024-09-30 20:10:43.587220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:59.464 [2024-09-30 20:10:43.587247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:59.464 [2024-09-30 20:10:43.587290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587297] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:59.464 [2024-09-30 20:10:43.587312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:59.464 [2024-09-30 20:10:43.587327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:59.464 [2024-09-30 20:10:43.587338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:59.464 [2024-09-30 20:10:43.587346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:59.464 [2024-09-30 20:10:43.587354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:59.464 [2024-09-30 20:10:43.587366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:59.464 [2024-09-30 20:10:43.587373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:59.464 [2024-09-30 20:10:43.587389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:59.464 [2024-09-30 20:10:43.587397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:59.464 [2024-09-30 20:10:43.587417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:59.464 [2024-09-30 20:10:43.587425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.464 [2024-09-30 20:10:43.587441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:59.464 
[2024-09-30 20:10:43.587449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:59.464 [2024-09-30 20:10:43.587456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:59.464 [2024-09-30 20:10:43.587463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:59.464 [2024-09-30 20:10:43.587470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:59.464 [2024-09-30 20:10:43.587477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:59.464 [2024-09-30 20:10:43.587487] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:59.464 [2024-09-30 20:10:43.587498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.464 [2024-09-30 20:10:43.587508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:59.464 [2024-09-30 20:10:43.587516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:59.464 [2024-09-30 20:10:43.587525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:59.464 [2024-09-30 20:10:43.587533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:59.464 [2024-09-30 20:10:43.587544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:59.464 [2024-09-30 20:10:43.587552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:59.464 [2024-09-30 20:10:43.587571] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:59.464 [2024-09-30 20:10:43.587579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:59.464 [2024-09-30 20:10:43.587586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:59.464 [2024-09-30 20:10:43.587593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:59.465 [2024-09-30 20:10:43.587600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:59.465 [2024-09-30 20:10:43.587607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:59.465 [2024-09-30 20:10:43.587615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:59.465 [2024-09-30 20:10:43.587623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:59.465 [2024-09-30 20:10:43.587630] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:59.465 [2024-09-30 20:10:43.587638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.465 [2024-09-30 20:10:43.587648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:59.465 [2024-09-30 20:10:43.587657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:26:59.465 [2024-09-30 20:10:43.587666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:59.465 [2024-09-30 20:10:43.587673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:59.465 [2024-09-30 20:10:43.587683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.587693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:59.465 [2024-09-30 20:10:43.587706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:26:59.465 [2024-09-30 20:10:43.587714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.629749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.629806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:59.465 [2024-09-30 20:10:43.629824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.989 ms 00:26:59.465 [2024-09-30 20:10:43.629833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.629938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.629949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:59.465 [2024-09-30 20:10:43.629964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:59.465 [2024-09-30 20:10:43.629973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.669926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.669980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:59.465 [2024-09-30 
20:10:43.669992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.886 ms 00:26:59.465 [2024-09-30 20:10:43.670000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.670044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.670054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:59.465 [2024-09-30 20:10:43.670063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:59.465 [2024-09-30 20:10:43.670072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.670185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.670204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:59.465 [2024-09-30 20:10:43.670214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:59.465 [2024-09-30 20:10:43.670224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.670398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.670412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:59.465 [2024-09-30 20:10:43.670422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:26:59.465 [2024-09-30 20:10:43.670432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.687263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.687325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:59.465 [2024-09-30 20:10:43.687338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.809 ms 00:26:59.465 [2024-09-30 20:10:43.687347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:59.465 [2024-09-30 20:10:43.687519] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:26:59.465 [2024-09-30 20:10:43.687535] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:59.465 [2024-09-30 20:10:43.687546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.687556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:59.465 [2024-09-30 20:10:43.687567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:59.465 [2024-09-30 20:10:43.687575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.700105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.700314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:59.465 [2024-09-30 20:10:43.700387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.506 ms 00:26:59.465 [2024-09-30 20:10:43.700425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.700596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.700679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:59.465 [2024-09-30 20:10:43.700705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:26:59.465 [2024-09-30 20:10:43.700725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.700831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.700861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:59.465 [2024-09-30 20:10:43.700884] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:59.465 [2024-09-30 20:10:43.700905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.701571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.701836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:59.465 [2024-09-30 20:10:43.701916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:26:59.465 [2024-09-30 20:10:43.701941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.701983] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:59.465 [2024-09-30 20:10:43.702019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.702040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:59.465 [2024-09-30 20:10:43.702060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:59.465 [2024-09-30 20:10:43.702079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.716395] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:59.465 [2024-09-30 20:10:43.716691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.716713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:59.465 [2024-09-30 20:10:43.716726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.578 ms 00:26:59.465 [2024-09-30 20:10:43.716735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.719237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.719292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore L2P 00:26:59.465 [2024-09-30 20:10:43.719304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:26:59.465 [2024-09-30 20:10:43.719313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.719409] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:26:59.465 [2024-09-30 20:10:43.719466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.719476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:59.465 [2024-09-30 20:10:43.719487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:59.465 [2024-09-30 20:10:43.719495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.719523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.719551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:59.465 [2024-09-30 20:10:43.719561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:59.465 [2024-09-30 20:10:43.719570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.719611] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:59.465 [2024-09-30 20:10:43.719622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.719635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:59.465 [2024-09-30 20:10:43.719643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:59.465 [2024-09-30 20:10:43.719651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.748732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:59.465 [2024-09-30 20:10:43.748789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:59.465 [2024-09-30 20:10:43.748804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.060 ms 00:26:59.465 [2024-09-30 20:10:43.748814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.748924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.465 [2024-09-30 20:10:43.748936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:59.465 [2024-09-30 20:10:43.748946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:59.465 [2024-09-30 20:10:43.748954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.465 [2024-09-30 20:10:43.750735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.465 ms, result 0 00:28:14.070  Copying: 900/1048576 [kB] (900 kBps) Copying: 15/1024 [MB] (14 MBps) Copying: 28/1024 [MB] (13 MBps) Copying: 44/1024 [MB] (15 MBps) Copying: 59/1024 [MB] (14 MBps) Copying: 78/1024 [MB] (19 MBps) Copying: 99/1024 [MB] (21 MBps) Copying: 115/1024 [MB] (16 MBps) Copying: 136/1024 [MB] (21 MBps) Copying: 152/1024 [MB] (15 MBps) Copying: 169/1024 [MB] (17 MBps) Copying: 182/1024 [MB] (13 MBps) Copying: 192/1024 [MB] (10 MBps) Copying: 207/1024 [MB] (14 MBps) Copying: 218/1024 [MB] (11 MBps) Copying: 231/1024 [MB] (12 MBps) Copying: 243/1024 [MB] (12 MBps) Copying: 254/1024 [MB] (11 MBps) Copying: 266/1024 [MB] (12 MBps) Copying: 278/1024 [MB] (11 MBps) Copying: 289/1024 [MB] (10 MBps) Copying: 300/1024 [MB] (11 MBps) Copying: 312/1024 [MB] (12 MBps) Copying: 323/1024 [MB] (10 MBps) Copying: 336/1024 [MB] (12 MBps) Copying: 349/1024 [MB] (12 MBps) Copying: 360/1024 [MB] (11 MBps) Copying: 376/1024 [MB] (16 MBps) Copying: 396/1024 [MB] (20 MBps) Copying: 415/1024 [MB] (18 MBps) Copying: 427/1024 
[MB] (11 MBps) Copying: 451/1024 [MB] (23 MBps) Copying: 463/1024 [MB] (12 MBps) Copying: 475/1024 [MB] (12 MBps) Copying: 490/1024 [MB] (14 MBps) Copying: 504/1024 [MB] (13 MBps) Copying: 519/1024 [MB] (15 MBps) Copying: 530/1024 [MB] (11 MBps) Copying: 542/1024 [MB] (12 MBps) Copying: 563/1024 [MB] (20 MBps) Copying: 579/1024 [MB] (15 MBps) Copying: 597/1024 [MB] (18 MBps) Copying: 615/1024 [MB] (18 MBps) Copying: 626/1024 [MB] (10 MBps) Copying: 640/1024 [MB] (14 MBps) Copying: 652/1024 [MB] (11 MBps) Copying: 668/1024 [MB] (16 MBps) Copying: 689/1024 [MB] (20 MBps) Copying: 702/1024 [MB] (13 MBps) Copying: 721/1024 [MB] (19 MBps) Copying: 734/1024 [MB] (12 MBps) Copying: 746/1024 [MB] (12 MBps) Copying: 764/1024 [MB] (17 MBps) Copying: 774/1024 [MB] (10 MBps) Copying: 785/1024 [MB] (10 MBps) Copying: 796/1024 [MB] (10 MBps) Copying: 806/1024 [MB] (10 MBps) Copying: 818/1024 [MB] (11 MBps) Copying: 837/1024 [MB] (18 MBps) Copying: 849/1024 [MB] (12 MBps) Copying: 862/1024 [MB] (12 MBps) Copying: 874/1024 [MB] (12 MBps) Copying: 885/1024 [MB] (10 MBps) Copying: 898/1024 [MB] (12 MBps) Copying: 910/1024 [MB] (12 MBps) Copying: 923/1024 [MB] (13 MBps) Copying: 936/1024 [MB] (12 MBps) Copying: 948/1024 [MB] (11 MBps) Copying: 959/1024 [MB] (11 MBps) Copying: 971/1024 [MB] (12 MBps) Copying: 983/1024 [MB] (11 MBps) Copying: 993/1024 [MB] (10 MBps) Copying: 1007/1024 [MB] (14 MBps) Copying: 1018/1024 [MB] (10 MBps) Copying: 1024/1024 [MB] (average 13 MBps)[2024-09-30 20:11:58.395124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.070 [2024-09-30 20:11:58.395169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:14.070 [2024-09-30 20:11:58.395182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:14.070 [2024-09-30 20:11:58.395189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.070 [2024-09-30 20:11:58.395206] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:14.070 [2024-09-30 20:11:58.397459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.070 [2024-09-30 20:11:58.397482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:14.070 [2024-09-30 20:11:58.397491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:28:14.070 [2024-09-30 20:11:58.397498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.070 [2024-09-30 20:11:58.397663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.070 [2024-09-30 20:11:58.397670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:14.070 [2024-09-30 20:11:58.397677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:28:14.070 [2024-09-30 20:11:58.397683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.070 [2024-09-30 20:11:58.397705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.070 [2024-09-30 20:11:58.397715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:14.070 [2024-09-30 20:11:58.397721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:14.070 [2024-09-30 20:11:58.397727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.070 [2024-09-30 20:11:58.397769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.070 [2024-09-30 20:11:58.397776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:14.070 [2024-09-30 20:11:58.397782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:14.070 [2024-09-30 20:11:58.397788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.070 [2024-09-30 20:11:58.397798] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Bands validity: 00:28:14.070 [2024-09-30 20:11:58.397808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:28:14.070 [2024-09-30 20:11:58.397816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.070 [2024-09-30 20:11:58.397873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 
0 state: free 00:28:14.071 [2024-09-30 20:11:58.397891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 
wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.397999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 
/ 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 
0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
84: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:14.071 [2024-09-30 20:11:58.398406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:14.072 [2024-09-30 20:11:58.398414] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1cf7d4fe-8411-4a55-85a0-d1b8ab869310 00:28:14.072 [2024-09-30 20:11:58.398420] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:28:14.072 [2024-09-30 20:11:58.398426] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131616 00:28:14.072 [2024-09-30 20:11:58.398432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131584 00:28:14.072 [2024-09-30 20:11:58.398438] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:28:14.072 [2024-09-30 20:11:58.398444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:14.072 [2024-09-30 20:11:58.398450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:14.072 [2024-09-30 20:11:58.398456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:14.072 [2024-09-30 20:11:58.398461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:14.072 [2024-09-30 20:11:58.398466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:14.072 [2024-09-30 20:11:58.398471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.072 [2024-09-30 20:11:58.398477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:14.072 [2024-09-30 20:11:58.398483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:28:14.072 [2024-09-30 20:11:58.398489] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.072 [2024-09-30 20:11:58.409841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.072 [2024-09-30 20:11:58.409865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:14.072 [2024-09-30 20:11:58.409873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.339 ms 00:28:14.072 [2024-09-30 20:11:58.409879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.072 [2024-09-30 20:11:58.410165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.072 [2024-09-30 20:11:58.410173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:14.072 [2024-09-30 20:11:58.410183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:14.072 [2024-09-30 20:11:58.410189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.072 [2024-09-30 20:11:58.434066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.072 [2024-09-30 20:11:58.434096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.072 [2024-09-30 20:11:58.434103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.072 [2024-09-30 20:11:58.434110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.072 [2024-09-30 20:11:58.434159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.072 [2024-09-30 20:11:58.434167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.072 [2024-09-30 20:11:58.434176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.072 [2024-09-30 20:11:58.434182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.072 [2024-09-30 20:11:58.434221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.072 
[2024-09-30 20:11:58.434229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.072 [2024-09-30 20:11:58.434236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.072 [2024-09-30 20:11:58.434241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.072 [2024-09-30 20:11:58.434253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.072 [2024-09-30 20:11:58.434259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.072 [2024-09-30 20:11:58.434278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.072 [2024-09-30 20:11:58.434288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.497998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.498036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.330 [2024-09-30 20:11:58.498051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.498057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.549604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.549641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.330 [2024-09-30 20:11:58.549656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.549662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.549732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.549740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.330 [2024-09-30 20:11:58.549747] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.549753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.549781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.549788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.330 [2024-09-30 20:11:58.549795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.549800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.549865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.549873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.330 [2024-09-30 20:11:58.549880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.549886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.549909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.549916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:14.330 [2024-09-30 20:11:58.549923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.549929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.549963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.549970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.330 [2024-09-30 20:11:58.549976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.549982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 
20:11:58.550020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.330 [2024-09-30 20:11:58.550028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.330 [2024-09-30 20:11:58.550035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.330 [2024-09-30 20:11:58.550041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.330 [2024-09-30 20:11:58.550146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 154.995 ms, result 0 00:28:14.897 00:28:14.897 00:28:14.897 20:11:59 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:17.431 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:17.431 Process with pid 78577 is not found 00:28:17.431 Remove shared memory files 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 78577 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78577 ']' 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78577 00:28:17.431 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78577) - No such process 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 78577 is not found' 
00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:17.431 20:12:01 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:28:17.432 20:12:01 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_band_md /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_l2p_l1 /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_l2p_l2 /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_l2p_l2_ctx /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_nvc_md /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_p2l_pool /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_sb /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_sb_shm /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_trim_bitmap /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_trim_log /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_trim_md /dev/hugepages/ftl_1cf7d4fe-8411-4a55-85a0-d1b8ab869310_vmap 00:28:17.432 20:12:01 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:17.432 20:12:01 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:17.432 20:12:01 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:17.432 ************************************ 00:28:17.432 END TEST ftl_restore_fast 00:28:17.432 ************************************ 00:28:17.432 00:28:17.432 real 4m37.102s 00:28:17.432 user 4m25.255s 00:28:17.432 sys 0m11.706s 00:28:17.432 20:12:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:17.432 20:12:01 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:17.432 20:12:01 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:17.432 20:12:01 ftl -- ftl/ftl.sh@14 -- # killprocess 72653 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@950 -- # '[' -z 72653 ']' 
00:28:17.432 Process with pid 72653 is not found 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@954 -- # kill -0 72653 00:28:17.432 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72653) - No such process 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72653 is not found' 00:28:17.432 20:12:01 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:17.432 20:12:01 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81449 00:28:17.432 20:12:01 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81449 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@831 -- # '[' -z 81449 ']' 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.432 20:12:01 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.432 20:12:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:17.432 [2024-09-30 20:12:01.711791] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:28:17.432 [2024-09-30 20:12:01.712236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81449 ]
00:28:17.691 [2024-09-30 20:12:01.856900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:18.020 [2024-09-30 20:12:02.128464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:28:18.589 20:12:02 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:18.590 20:12:02 ftl -- common/autotest_common.sh@864 -- # return 0
00:28:18.590 20:12:02 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:28:18.850 nvme0n1
00:28:19.110 20:12:03 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:28:19.110 20:12:03 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:19.110 20:12:03 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:19.110 20:12:03 ftl -- ftl/common.sh@28 -- # stores=b3f35580-98f9-4e37-82d3-0047165802dd
00:28:19.110 20:12:03 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:28:19.110 20:12:03 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3f35580-98f9-4e37-82d3-0047165802dd
00:28:19.369 20:12:03 ftl -- ftl/ftl.sh@23 -- # killprocess 81449
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@950 -- # '[' -z 81449 ']'
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@954 -- # kill -0 81449
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@955 -- # uname
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81449
00:28:19.369 killing process with pid 81449
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81449'
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@969 -- # kill 81449
00:28:19.369 20:12:03 ftl -- common/autotest_common.sh@974 -- # wait 81449
00:28:20.745 20:12:05 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:21.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:21.003 Waiting for block devices as requested
00:28:21.003 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:28:21.003 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:28:21.263 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:28:21.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:28:26.544 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:28:26.544 20:12:10 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:28:26.544 Remove shared memory files
00:28:26.544 20:12:10 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:26.544 20:12:10 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:28:26.544 20:12:10 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:28:26.544 20:12:10 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:28:26.544 20:12:10 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:26.544 20:12:10 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:28:26.544
00:28:26.544 real 12m53.851s
00:28:26.544 user 14m40.357s
00:28:26.544 sys 1m26.241s
00:28:26.544 ************************************
00:28:26.544 20:12:10 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:26.544 20:12:10 ftl -- common/autotest_common.sh@10 -- # set +x
00:28:26.544 END TEST ftl
00:28:26.544 ************************************
00:28:26.544 20:12:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:26.544 20:12:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:26.544 20:12:10 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:28:26.544 20:12:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:28:26.544 20:12:10 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:28:26.544 20:12:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:28:26.544 20:12:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:28:26.544 20:12:10 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:28:26.544 20:12:10 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:28:26.544 20:12:10 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:28:26.544 20:12:10 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:26.544 20:12:10 -- common/autotest_common.sh@10 -- # set +x
00:28:26.544 20:12:10 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:28:26.544 20:12:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:28:26.544 20:12:10 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:28:26.544 20:12:10 -- common/autotest_common.sh@10 -- # set +x
00:28:27.929 INFO: APP EXITING
00:28:27.929 INFO: killing all VMs
00:28:27.929 INFO: killing vhost app
00:28:27.929 INFO: EXIT DONE
00:28:28.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:28.763 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:28:28.763 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:28:28.763 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:28:28.763 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:28:29.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:29.305 Cleaning
00:28:29.305 Removing: /var/run/dpdk/spdk0/config
00:28:29.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:29.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:29.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:29.305 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:29.305 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:29.306 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:29.306 Removing: /var/run/dpdk/spdk0
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57259
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57461
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57674
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57767
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57806
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57929
00:28:29.306 Removing: /var/run/dpdk/spdk_pid57947
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58140
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58240
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58335
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58441
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58538
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58578
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58620
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58690
00:28:29.306 Removing: /var/run/dpdk/spdk_pid58802
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59227
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59291
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59343
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59359
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59450
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59466
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59563
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59579
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59637
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59655
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59708
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59725
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59881
00:28:29.306 Removing: /var/run/dpdk/spdk_pid59917
00:28:29.306 Removing: /var/run/dpdk/spdk_pid60001
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60178
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60257
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60293
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60720
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60818
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60929
00:28:29.591 Removing: /var/run/dpdk/spdk_pid60982
00:28:29.591 Removing: /var/run/dpdk/spdk_pid61008
00:28:29.591 Removing: /var/run/dpdk/spdk_pid61086
00:28:29.591 Removing: /var/run/dpdk/spdk_pid61713
00:28:29.591 Removing: /var/run/dpdk/spdk_pid61749
00:28:29.591 Removing: /var/run/dpdk/spdk_pid62216
00:28:29.591 Removing: /var/run/dpdk/spdk_pid62314
00:28:29.591 Removing: /var/run/dpdk/spdk_pid62429
00:28:29.591 Removing: /var/run/dpdk/spdk_pid62482
00:28:29.591 Removing: /var/run/dpdk/spdk_pid62512
00:28:29.591 Removing: /var/run/dpdk/spdk_pid62533
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64369
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64506
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64510
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64522
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64562
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64566
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64578
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64623
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64627
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64639
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64685
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64689
00:28:29.591 Removing: /var/run/dpdk/spdk_pid64701
00:28:29.591 Removing: /var/run/dpdk/spdk_pid66057
00:28:29.591 Removing: /var/run/dpdk/spdk_pid66159
00:28:29.591 Removing: /var/run/dpdk/spdk_pid67565
00:28:29.591 Removing: /var/run/dpdk/spdk_pid68957
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69051
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69133
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69214
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69319
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69394
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69536
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69896
00:28:29.591 Removing: /var/run/dpdk/spdk_pid69932
00:28:29.591 Removing: /var/run/dpdk/spdk_pid70376
00:28:29.591 Removing: /var/run/dpdk/spdk_pid70561
00:28:29.591 Removing: /var/run/dpdk/spdk_pid70661
00:28:29.591 Removing: /var/run/dpdk/spdk_pid70783
00:28:29.591 Removing: /var/run/dpdk/spdk_pid70842
00:28:29.591 Removing: /var/run/dpdk/spdk_pid70873
00:28:29.591 Removing: /var/run/dpdk/spdk_pid71175
00:28:29.591 Removing: /var/run/dpdk/spdk_pid71235
00:28:29.591 Removing: /var/run/dpdk/spdk_pid71308
00:28:29.591 Removing: /var/run/dpdk/spdk_pid71702
00:28:29.591 Removing: /var/run/dpdk/spdk_pid71848
00:28:29.591 Removing: /var/run/dpdk/spdk_pid72653
00:28:29.591 Removing: /var/run/dpdk/spdk_pid72780
00:28:29.591 Removing: /var/run/dpdk/spdk_pid72947
00:28:29.591 Removing: /var/run/dpdk/spdk_pid73044
00:28:29.591 Removing: /var/run/dpdk/spdk_pid73331
00:28:29.591 Removing: /var/run/dpdk/spdk_pid73573
00:28:29.591 Removing: /var/run/dpdk/spdk_pid73915
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74101
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74199
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74246
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74347
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74372
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74429
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74590
00:28:29.592 Removing: /var/run/dpdk/spdk_pid74805
00:28:29.592 Removing: /var/run/dpdk/spdk_pid75078
00:28:29.592 Removing: /var/run/dpdk/spdk_pid75347
00:28:29.592 Removing: /var/run/dpdk/spdk_pid75621
00:28:29.592 Removing: /var/run/dpdk/spdk_pid75966
00:28:29.592 Removing: /var/run/dpdk/spdk_pid76097
00:28:29.592 Removing: /var/run/dpdk/spdk_pid76173
00:28:29.592 Removing: /var/run/dpdk/spdk_pid76560
00:28:29.592 Removing: /var/run/dpdk/spdk_pid76621
00:28:29.592 Removing: /var/run/dpdk/spdk_pid76912
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77190
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77559
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77670
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77718
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77776
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77832
00:28:29.592 Removing: /var/run/dpdk/spdk_pid77895
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78085
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78165
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78229
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78318
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78358
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78426
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78577
00:28:29.592 Removing: /var/run/dpdk/spdk_pid78804
00:28:29.592 Removing: /var/run/dpdk/spdk_pid79328
00:28:29.592 Removing: /var/run/dpdk/spdk_pid79987
00:28:29.592 Removing: /var/run/dpdk/spdk_pid80621
00:28:29.592 Removing: /var/run/dpdk/spdk_pid81449
00:28:29.592 Clean
00:28:29.890 20:12:13 -- common/autotest_common.sh@1451 -- # return 0
00:28:29.890 20:12:13 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:28:29.890 20:12:13 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:29.890 20:12:13 -- common/autotest_common.sh@10 -- # set +x
00:28:29.890 20:12:14 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:28:29.890 20:12:14 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:29.890 20:12:14 -- common/autotest_common.sh@10 -- # set +x
00:28:29.890 20:12:14 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:29.890 20:12:14 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:29.890 20:12:14 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:29.890 20:12:14 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:28:29.890 20:12:14 -- spdk/autotest.sh@394 -- # hostname
00:28:29.890 20:12:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:29.890 geninfo: WARNING: invalid characters removed from testname!
00:28:56.485 20:12:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:59.044 20:12:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:00.949 20:12:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:03.491 20:12:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:06.046 20:12:50 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:07.963 20:12:52 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:09.880 20:12:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:09.880 20:12:54 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:29:09.880 20:12:54 -- common/autotest_common.sh@1681 -- $ lcov --version
00:29:09.880 20:12:54 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:29:09.880 20:12:54 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:29:09.880 20:12:54 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:29:09.880 20:12:54 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:29:09.880 20:12:54 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:29:09.880 20:12:54 -- scripts/common.sh@336 -- $ IFS=.-:
00:29:09.880 20:12:54 -- scripts/common.sh@336 -- $ read -ra ver1
00:29:09.880 20:12:54 -- scripts/common.sh@337 -- $ IFS=.-:
00:29:09.880 20:12:54 -- scripts/common.sh@337 -- $ read -ra ver2
00:29:09.880 20:12:54 -- scripts/common.sh@338 -- $ local 'op=<'
00:29:09.880 20:12:54 -- scripts/common.sh@340 -- $ ver1_l=2
00:29:09.880 20:12:54 -- scripts/common.sh@341 -- $ ver2_l=1
00:29:09.880 20:12:54 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:29:09.880 20:12:54 -- scripts/common.sh@344 -- $ case "$op" in
00:29:09.880 20:12:54 -- scripts/common.sh@345 -- $ : 1
00:29:09.880 20:12:54 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:29:09.880 20:12:54 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:09.880 20:12:54 -- scripts/common.sh@365 -- $ decimal 1
00:29:09.880 20:12:54 -- scripts/common.sh@353 -- $ local d=1
00:29:09.880 20:12:54 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:29:09.880 20:12:54 -- scripts/common.sh@355 -- $ echo 1
00:29:09.880 20:12:54 -- scripts/common.sh@365 -- $ ver1[v]=1
00:29:09.880 20:12:54 -- scripts/common.sh@366 -- $ decimal 2
00:29:09.880 20:12:54 -- scripts/common.sh@353 -- $ local d=2
00:29:09.880 20:12:54 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:29:09.880 20:12:54 -- scripts/common.sh@355 -- $ echo 2
00:29:09.880 20:12:54 -- scripts/common.sh@366 -- $ ver2[v]=2
00:29:09.880 20:12:54 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:29:09.880 20:12:54 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:29:09.880 20:12:54 -- scripts/common.sh@368 -- $ return 0
00:29:09.880 20:12:54 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:09.880 20:12:54 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:29:09.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.880 --rc genhtml_branch_coverage=1
00:29:09.880 --rc genhtml_function_coverage=1
00:29:09.880 --rc genhtml_legend=1
00:29:09.880 --rc geninfo_all_blocks=1
00:29:09.880 --rc geninfo_unexecuted_blocks=1
00:29:09.880
00:29:09.880 '
00:29:09.880 20:12:54 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:29:09.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.880 --rc genhtml_branch_coverage=1
00:29:09.880 --rc genhtml_function_coverage=1
00:29:09.880 --rc genhtml_legend=1
00:29:09.880 --rc geninfo_all_blocks=1
00:29:09.880 --rc geninfo_unexecuted_blocks=1
00:29:09.880
00:29:09.880 '
00:29:09.880 20:12:54 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:29:09.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.880 --rc genhtml_branch_coverage=1
00:29:09.880 --rc genhtml_function_coverage=1
00:29:09.880 --rc genhtml_legend=1
00:29:09.880 --rc geninfo_all_blocks=1
00:29:09.880 --rc geninfo_unexecuted_blocks=1
00:29:09.880
00:29:09.880 '
00:29:09.880 20:12:54 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:29:09.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:09.880 --rc genhtml_branch_coverage=1
00:29:09.880 --rc genhtml_function_coverage=1
00:29:09.880 --rc genhtml_legend=1
00:29:09.880 --rc geninfo_all_blocks=1
00:29:09.880 --rc geninfo_unexecuted_blocks=1
00:29:09.880
00:29:09.880 '
00:29:09.880 20:12:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:09.880 20:12:54 -- scripts/common.sh@15 -- $ shopt -s extglob
00:29:09.880 20:12:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:09.880 20:12:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:09.880 20:12:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:09.880 20:12:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:09.880 20:12:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:09.880 20:12:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:09.880 20:12:54 -- paths/export.sh@5 -- $ export PATH
00:29:09.881 20:12:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:09.881 20:12:54 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:29:09.881 20:12:54 -- common/autobuild_common.sh@479 -- $ date +%s
00:29:09.881 20:12:54 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727727174.XXXXXX
00:29:09.881 20:12:54 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727727174.BSZGPw
00:29:09.881 20:12:54 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:29:09.881 20:12:54 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:29:09.881 20:12:54 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:29:09.881 20:12:54 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:29:09.881 20:12:54 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:29:09.881 20:12:54 -- common/autobuild_common.sh@495 -- $ get_config_params
00:29:09.881 20:12:54 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:29:09.881 20:12:54 -- common/autotest_common.sh@10 -- $ set +x
00:29:09.881 20:12:54 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:29:09.881 20:12:54 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:29:09.881 20:12:54 -- pm/common@17 -- $ local monitor
00:29:09.881 20:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:09.881 20:12:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:09.881 20:12:54 -- pm/common@25 -- $ sleep 1
00:29:09.881 20:12:54 -- pm/common@21 -- $ date +%s
00:29:09.881 20:12:54 -- pm/common@21 -- $ date +%s
00:29:09.881 20:12:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727727174
00:29:09.881 20:12:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727727174
00:29:09.881 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727727174_collect-cpu-load.pm.log
00:29:09.881 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727727174_collect-vmstat.pm.log
00:29:10.824 20:12:55 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:29:10.824 20:12:55 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:29:10.824 20:12:55 -- spdk/autopackage.sh@14 -- $ timing_finish
00:29:10.824 20:12:55 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:10.824 20:12:55 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:10.824 20:12:55 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:10.824 20:12:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:10.824 20:12:55 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:10.824 20:12:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:10.824 20:12:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:10.824 20:12:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:29:10.824 20:12:55 -- pm/common@44 -- $ pid=83128
00:29:10.824 20:12:55 -- pm/common@50 -- $ kill -TERM 83128
00:29:10.824 20:12:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:10.824 20:12:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:29:10.824 20:12:55 -- pm/common@44 -- $ pid=83129
00:29:10.824 20:12:55 -- pm/common@50 -- $ kill -TERM 83129
00:29:10.824 + [[ -n 5027 ]]
00:29:10.824 + sudo kill 5027
00:29:11.097 [Pipeline] }
00:29:11.114 [Pipeline] // timeout
00:29:11.120 [Pipeline] }
00:29:11.134 [Pipeline] // stage
00:29:11.139 [Pipeline] }
00:29:11.153 [Pipeline] // catchError
00:29:11.162 [Pipeline] stage
00:29:11.164 [Pipeline] { (Stop VM)
00:29:11.176 [Pipeline] sh
00:29:11.459 + vagrant halt
00:29:13.999 ==> default: Halting domain...
00:29:19.300 [Pipeline] sh
00:29:19.583 + vagrant destroy -f
00:29:22.129 ==> default: Removing domain...
00:29:23.087 [Pipeline] sh
00:29:23.370 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:23.381 [Pipeline] }
00:29:23.396 [Pipeline] // stage
00:29:23.402 [Pipeline] }
00:29:23.416 [Pipeline] // dir
00:29:23.421 [Pipeline] }
00:29:23.437 [Pipeline] // wrap
00:29:23.443 [Pipeline] }
00:29:23.456 [Pipeline] // catchError
00:29:23.465 [Pipeline] stage
00:29:23.468 [Pipeline] { (Epilogue)
00:29:23.481 [Pipeline] sh
00:29:23.830 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:29.126 [Pipeline] catchError
00:29:29.128 [Pipeline] {
00:29:29.142 [Pipeline] sh
00:29:29.429 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:29.429 Artifacts sizes are good
00:29:29.440 [Pipeline] }
00:29:29.454 [Pipeline] // catchError
00:29:29.465 [Pipeline] archiveArtifacts
00:29:29.473 Archiving artifacts
00:29:29.591 [Pipeline] cleanWs
00:29:29.604 [WS-CLEANUP] Deleting project workspace...
00:29:29.604 [WS-CLEANUP] Deferred wipeout is used...
00:29:29.611 [WS-CLEANUP] done
00:29:29.613 [Pipeline] }
00:29:29.628 [Pipeline] // stage
00:29:29.634 [Pipeline] }
00:29:29.648 [Pipeline] // node
00:29:29.653 [Pipeline] End of Pipeline
00:29:29.700 Finished: SUCCESS